[Binary artifact: POSIX (ustar) tar archive of Zuul CI output; the gzip-compressed payload is not recoverable as text. Recoverable archive listing:]

var/home/core/zuul-output/                     directory, mode 0755, owner core:core
var/home/core/zuul-output/logs/                directory, mode 0755, owner core:core
var/home/core/zuul-output/logs/kubelet.log.gz  file, mode 0644, owner core:core (gzip-compressed kubelet log)
ue95Y%*.͚V}\Фb&\q/4-ӭh`"]9mwb7Z yiMLwh}*rwS* G 7e,P1bAv%䛞/r4 CTsܜbOlrb\j&Î|.sHx q(Z?:pΌ~{pѭAA1LCSg>'7F&[oo`FCjry\1ї{"]^CE|.11a鎳 fw*Od9ӝ{ݗCV\e`,8+7W>O|:uiQϻ_; Ta7ix~G0^eJ8~/NoH_dut2S}2{~FwN{(|ze)?.~PoC܇9P\vPX|cU9ݕHQN0+=)=—a5,cb V0Wn߾/믤~{ 旳TӝW3]{'';wWW}8_-;c9(qHQ%irPn(>az]Ga<l)Z϶7ngц<6:vDZą{/ Rq7 !Baz؋B؃@Û2ױ3ve / FD PμcG@!."^tQY F3X-3q(\Z )R a#3 qu4D{S(sVA-020+aX||5mM^WryT{0<8Pmn<ūuz\U\HqbWX~xHu5h{ZoYU$BSZݷ>`7f7ԃ&y]\f\Nj Nj*Y僮\2? mDct9S ׀{7z7nkW{D=Oѽ,.ʦLVULƟ|0^Gk5=l`3kMKy u/v=~:/,_nZۏ`zJFE] 'T),RJs0_]{\@+_Th{!,ȿ RkŦ(L|}cYə8xm._Np3e~L$mKB-)۬#!owo5VWf--,Gxw 7R K%R-G!kV lx )1]_^,0kS K7PGgaa&0[NYkw'%:Y?;HIvg^Jn{N%er@1Y.pFovGΣоѢj[ U2W;W;V >9Va(zbᰤ K;N b9a* ոݔ_HXq) p! lM:6:G AGI[ex,&9H5I q0W;0Rw㭭w_ ?ݾŸ{\/@[gpf'K)l"a?!añ޶}Jzu Of]Z95yeQ>f dVEV{gM`]HA)ȌFvoFcWG({+:m1d(XK )& E0EX_,>NgIvGS3_U{,:9=Ѫ>q D:rT֗ al pWK [#jge?,u%oדet%42"gA̕nⲜYg3;g(e2.N/Mo:{Zޓ@H4Bf.骩._f~2 f0b:M=ѣYxn애սփli qER9$uad+f_1ݑbШ@rEćN,\gp o|vǷoҟgߜcO8=?{N ̣&0ޚ_#oA{tm5AmMm߻#l mMOւ(2\Cn]ɛlvfO1C2='Y̽|ejE(&3Q|!B Bq1Xjq]l L#"TEfw 8"kby%'[-e-ٜA[bSdSRFRԷ7Z2p F܁_w0]?Lt) IiZ_Y?L%R^SWhr^bԖb] V!! Tj"^ ͩ{ݑ}!8^( /SU`,n_Jr9Im(9 ٘`S9OlPB`:'POI[(nu&W܃["ų$R(Ahr-HxrYN3q$_8 h 8C2UZY.)vD]cwǂ 5mayCB~{M$e9Ა^?\2N$0H0 ;Uy̼YN0U|g?}~y4:.?W׏K^Sy{;߽ ֽl ެ߷C-)}hُڧx`:+4}P+y`{9.Kiτ#R/_SC$ `": ws>K G u"]}xaj((k5u t#]mCZτZhHKywo \4펄]wF ek SyЃ*H=OR!ת RmtrX9g!g%Wo=c͟Bt&"&sUCpQg͊Hc%cAF*']kl  E|Msc|[({sm37HFGoN/:?qg8D?2vHwE&y5{{5Wյ۫'j6lx 0%4Z@F3 }A&! " ƄF-3H $AFJkQ;Y`u4-JDE` jn[Pnm6v|Q~3Y]7xt[ O `_y&þ&^rq:h5F3׫1!ϳގ;ًY~XK+#1_~7hφ6˂Q 󓳲[uIW4|o˥-xg@n;o:40 = Q},Jkաc* cBH#2WU`KGcbi¡*jzs3W\2FqDј*.Kk[n+c,^WhOgU-b/k6zn]pepv_ߣ@焁fvpLG ZHqoqoYv}W"P d 62gVrtvZoPq,A>TJi\ ^oYkk$}΢5FHI_#k$}5wI_jI_#k$}5}Ĩ7BFh]#kvЮ5BFh]#kv7jߜ?Wml2ˀ+~{ˏypZGJt݇|ǒ.K(DẀuE =z/"]:btl5C$Y eӥHҠv'SƽHwo~5bݍkx+p*[\ör ߠ3A=tfx(I/d̃]?8N7."E+s:ِ"4׎bQ́ĤM*>*31B2 FBB@3!j@!)e $-twBOul&嬫ImO!Om' vY.h B<0e_gH,5"Oj٤YviCNDzk0qvLI Ahhg /! ٛzTI ET:!3^e0䇯h(-M::]j,01 BT|)[kSDI"[/N &i}pu{ϫ^1ٷVZ@aJPj3 Bu]jkrpL'>hTLc4Ǧ9cjbڪ6^8%:E`7P[tuwaٻ8$W 26> I֬%;#yjv]MR\}#h6>x$ɢT`]ٕQY/_dFFp)0$h4*mK~<vT훺W?9Ut>, 0sC;l󩛎w>MOO9ݥ!Gϋf?8h#|(|"$:9Ws k|ifRS6HAʃC-b!|Wzr;r`)NK`#wNaY~l6[*өoxãqSo_s|w/`<6߿s_,LcsaRύ^t6m(e܅ |KrOԌӒ5K9*ڞRjͶ/ ]GE^}['uB\435H_M",}l 'I۟x2WPdT@Y] Q뜖c7M-SExEjv0 PjzoI3i5QMlن:ōu'˒럄ȱ1v-n b46Զh)GdKӱ9?<\\X٧) 慔lpCjvgO -i]: ^$0Me3G⫂P~mhl&L{)Q?H?`l.5Bd1|^ 0:j"{?D'R< Žc8ͥ ֈ`Voa`n{3nU ][-,ʼnIg8Ͼ;ԽG Uĵ;Ȫ=ѭ%6{$O*Kؕ\&:eZ,Ya1Y.< IP[TJ /e<ɄOv٥"P/ź,v&/B{=< z:\ Y͒_fYU>0#F1h^+A{r]랕!*mfBRәK>bV_\Lݎ+(7M5777M˅:{omW{ӻWo?a޼~+u#0&u"`|g~\?rUk5&(S~~iG^U\jv3km@~.~9jUMV6&2 3brq-j~Y*r v4 S߳yi 6[1CdV;GJQoNoƩQ RDEI0e4 iBH B#GV[Z^MrAx-M/Brmb:x1X L8+&Fh86lB {6Av ;AlEh8r4Cpzx+'㭺GfC#p8I`s_:P:+wR QavQK&R/0E98 VϚl #uA'!n<8ELy' !jH!$|, jS7|0L0 F"I*A{+%|JΆMӂu9Zfott֗lzԋL泳zWsMoSpm.t :9aZ+oa}IթeW?l|YQy{tT;*Q`%G⑍LaQȃXD  2y %Z3T>RLzKݥoW12#XP qlXM3nR Xp!QzqbkJ6봒OTg4(x<=#v $#s>JR@ GFh q6H :1 Xpf* =vP` Y$q"LEt!"'Uf&vĶӉ_r1ٴTvb@x3 $DR*x57Eb*:Fsgf.<̦}Cz;<[߯3zVa=G^+up&zFHя :cv`CwƤ]YmRqi\[=IsNj}u4 "M?!5)yM΀їri+@\u_a=srV%\TpUl0"ԕ]1'w֋7igH/!' 
F9"i2Ix:g Kkk%Ff"Mډ4^AIs=o0;+' dՍ2-;%p~!%$;s-8Ҋ6TQA{VH3bx A 8t8[jY ATpPhi?%Ԙ}'kPt*2 1AK!JP:Z+s&Jq!Q:eƹ \g@E"ܕaY`aaEm;Ej3)IꆄޡH8y /*,T`t&UFJ{&iCU)ImT:n Gf30r8 t1f# g֭OtVA€ʐ6`Ԟ t&i[$̬'Zq:^I;qlNa׃&K7C7wӣYVM, B֣Z0oI69.CQآCF-=1=[#g5ߋ^Gin*^tyW=.sO_4f%r+xeWUR4\f9~Yj^|NpGDYܤ|.iw.P2+hI;k9_K`/Y$^T5qƢr؈{4 9䖁$=pF4"h±=\."gS)i:RKoHES xqMRe?R*dʒqYAewADM,p L8WVfO&a (}W2 d8EoSh zbA(O sD!e5x0YO.({B]*cb^"R5`qt,g'jsR6_ \ cnBV*AFj@)OqitY] &$K8*wo MTuY~cPgܒ9~5Glr5Gn5Q+a;G%ݪ&QMt}10lcQq/Tŭ8VJN(-W W[[0Ud}4(*EkSk<ؓGcDK<XVG{* #m~%/U7ӿRj[&g"x[ ӈmCk?aV ZEԣ@Qۚ]?&-6xaydkoڡd )SShvNhV'!E&Q9؉Njucnkxxy,|\Ͻ7_ychYD`hZU9IG[e.#Oӷ~w4wO&otL;XQ1lOpŵPܗ8]\2d4ĺv//[i~/ 颾sH(䟳~Μ>} 09א L5%zsLqx2u?-T]hq=>Y9N=MlT+AS{,/J:7d fy.d‘`~~&Jp/UITOs{[ >~F98=/ԅSb.ѵW]ǃ&?}O>F'c i%>۷[lϬT%]Va:0MeK&fBh$̕NK(+M_OfɊgl.$y 4/Noy{EˇQ|ΟXvlY5lŒ^;^6l-g][|;'A77$_HOE%*hKX%Y9T:`BeW$& e:L+ɎCAZ ߀|*?_3ƂWb q烒m6Y@kiĸ~{N߉u d|㮼ӓAXz\ ^n /sW'?LFkڣ6R"D,L 8\Gn$g$Nd 5A p\:no:=Oѽ )misXHZh1dAGj3JxYqTYNR Oy%Pot"1Jvn,ARr56% ZqYq:L4Y2!Cn Jb<́%r"*fh#I`ͤdLJaJn7˻V{rZ&;LvG%/EÓ>El9OpƎeu[{7΅Rw9_ՕGkKKi gag犹DӴ7:kNƉdI<ҨbY]wjS&[i'|R'lR RGp ;4ZJ.㞟Lq\M$U{f]ഓ|bNe"36O.^4!x0/VHi!~#Z a:muoމffŷ%޾_(Gor:҈:\<_q#BVz sƈ{H_ksgw~o;/_gw!Mm'&x4Yf"ȇG~x94'͘n$2nDs7Vn/ĆZF Y,;h8XLqlp{lyA64V`^'#-8thm4ѧ^W,ofg!w=T3䡚ﳇ۽?9O>ן*|w_>|\؃~G_h)ưDO&O#@=;]ڶ5ZZx@ׂ9?_>]-Z˝f )?~=nOb9XĞէ?jϜprOlbY珨md*c-B,!`@K 8јhi$#Z>4}!֫dHdSȬ*jlRWe:Ϭ.Ɇh|NHP}eVCIȊHkѡS:UE{GG#9Yֿ6h4n3yr ʚ@`)djZ9کHĚH|f$Sv>]m5uoe[-^ ᐦ'rs:lJiO?,FBEH3 h \x iNy{^x9lG;'UWJ lG%[(qg(%ep,?(rCU+0u LB IBޣqq3{Ih=.[>bW/yZ^7;8޻yq(uCnuTm;W͇jmj2V{u*WFgyq7= *9躟?_ŧXH8+HN2sW506"b<P<̾iӵ_WgvFZp5u6=>LNF\l;@!#TН!;] `+&vD Ǚ-$%L:FlmN.mUj땵_x2ot)*bZkb \b" JJ!qIwM۩Y:q2:Xvb=\9Y^v ăe&3T92NetR#nDYsK#)- oޞY2sIy!+.{Ǽyv́=MY<3-r#U heBA  5/W8XA'Lg<٪7۵UsA{jV}) *9GH@.  J4!*0B,sc#R SIʄxRsNLIE Z#g=0AKZѷnM5`;s !C˫>S(P6 UR4Wsڤ嬃9)$wKq }l8[sP]9eiJp$HHS`,8TP@H2>mU&D#%&F|*.x}vDB0*Q$'ce9VcO;Mnʨ%I~C;ɾzW6_m+ &-i`v3"63#z]3u o>h<+ķ6&r%iܒ,6?$)c=Wzz|Ic֔oGEͭO~3̼0r5?LF5wWA]& boA*h]BhMf1=-x:f!e1RQ1~H)># rʬ+, rk;qs$8剴╎3e(b 9FbR.;nP5xv.x.ALwN{sZ0Bzu1OkP`Y @VF0R \RF n#,=" KȒ  rP?1В *)J"{eu%NOQ(h x@ ŵ"AgiM(,h5ADim"e 9FYOx2zsDnѩ{{n56s*9s׳=VqȚ#_j hI@ 2-<8)Մ3tAi쪈REd>`5{¼X0.q뼟"G*/DkLԻ6KinftalWA C9?L8LԺ {A{%*W{ڳw9خzCyX ROdv;r ;s[$[׼$_y6=9xvnmq09RA0LJa O1\s#SM.:kg#~6Wr MEQ}zQۚ4j4-sQU+;X]+zʸ>P8Ph$l}Bk4 C- XM 8}pԢV?=KFzL@'\i -xeR%)Ĩ5tDDJ10`kG_yx4{DՀ<)飵4 D%Fs{K?bEu|k>F8,>y5y*UQ P+E= .p۪< m18;m<8q7\D"F!8th вFΆ$.) /$EG+=±DYBDcܠ R NFK_#)6 !jS5X-XT')U1:%6Ld-1$UV'UENtvڍ \3u85E\ ܁r zOݿP3, ʊ c.(\Զp2QqgWZ pċ3 MADEnRĠ4 _rMG[˵pAb"a ;c$:'-*¬Ӯe@kl~~9o<u]n j*iS /E1P``MCN@Ko3;jB) N KCв+NJq!*N<7g˻Oz=E$/6h8{xRw-%S;8{V<[)JraPTWSWe\ѼNk윖R؟^1#2}wz5 /X |~֤4ثh`!wu*ZzpWyo]9o\s:_0sQFa kf#?7~y^8F>uC?a< ]s7We CU+=nKsSBR߯13|I|$R, ?t7a52jˌqM*J5IR40 (. 
i#h9\0η7Ϫ;%m*|ml!|}.M,u90 %7%T(-ѥqdOFɓa[^#ȣ}*n6JiY+Py A"ҹD3)(uh.l3 ^tBOͳJ`Vt:#B(XK )N0ocPtt1^)ẲսnKF F|X7˅H%g\ì"Bt6I:ưx6簅-MbGM.̌2R?P=xw5>o8Z[sav=;G?^T'?^e-oI $Vkt9.<|eCPa`Ŵ@fmv/uczeouU+C\it&+$5́}>T+"wp1&T*\Sۯ{y_ާߝ_?{uzӏqp1tVIap>M\MC{b[4MP^7M4gW{wH=cJZ- J??^}}wGu֏yhuf{&h$t%O^a_Tr:qMUMM'T _A b71 S<GV:ů7MT`wt7Y2ӼMjg.f=UMS;fW=7߬kA|0qj&V/crY:9ӄLoPIL/0sC,|I+QQ훘=JA >ǫ _ToD0 i}wg;._Pq2qz~ʚOnɿ:޶1q^18_ISZts=r,Lʛߖ_a3h!rFBB,ZZJL)0"29ܷCBT'jܳPO 1N(fQxb#1~ # Qj3 8W7vM]Dם-siYዧFrJPZQl&@UHZcQ4 i6KX5A<]W 6<.H GKVyF"B9Onſ^ʔE2]g>?ϠՀ(h%Q [ZcI"!tL3=Vh5kz_18v-6EP\"#,Xp繗2 "` FsL(QmPQj-Ƹ#au)$Y-`͂mڧ[;]٥op}B A o\o"̭lAwUEGL&oW U5p)~n81}ޛ0Y_O#ЄZ| o Û#t1So-0r?̭vkZlZ[&+m>%ކ?YrHB2K{wA Fc!,gԠR9ح%R/wU[-xO݂G4A)"%D$y"{F5:ʸ̫| , @r_sU5s2yj[ן['jO |z@ހ`u&["WCUz# ^c7I$;5Ӝ6IZ,6k]|kl(q?̉Mԗ?Ϯ30@hJ&E,QЁ7 Ӛ0Zƾ0@ecxT:[:$=ZZka#V`,D#DVaR/1",DĔ62"" ` Xy$RD305כbvݮ:[B-KΗj|^-}J /I'Oh!NdȘ@ kAJ9=Z*ay̺&E" T)A}VJ\慚 5 + xo3ēgN?򅫻࿃7[Ԥ QpJ+8:y + 0!"RFfEFd)(R2T>RLzSNq飶obdF/*f"TndFfd'dlZ,<(,\:Jp-?nF95Y>uz^w4~tȜ(EAPx8$B`:1 7XN(BR@EZBIM*Lfh/#vƉ0хTsT}Alܱ=EV]Bs' Ac}`"`)Bȩ N g6H,`7fPb 4d 1  D9#`|F:HL89R'0 "f]gFD"bke4*ͨbJY =K _28I$W_nGM T*,uXie)?;.N@*Q#rcN .a4{09;Ĥĸ@"qIs/mƩhU?Q|z8hƦxOE=b/ u1臕^kdV΃\`Z](9L0+^]=~q9\1E:X),@%|j܏'ָ4.|ӖjcgmI %ȗtJÃ&0TT`D Wcv("F=\HT ݊Pi8チD$jwJTl W&@cEAJZXnv-ZrTիߥyɗ+|<]CvqbKXUnt0YS K:jgAZ ձ6rw=`$؞M26}dqW/!F<-uzG п.Ⱦ_=2o%|,,r:gg=GG R?7hWWXסF_uut޶*ozp[[8m?>.v=閪o6QwipS1l0$ջVÛ1Cf%sgMJ^3+%<)a/NCA8oI :: o۱t@e4 3͡Znek,;4oRvLy-oQB,&q{[rg*,yPgq{h忞-EԂQK`Z[Z-&q((TVG'>~^:k|ה*{}|&W}UͼvŋjEby{ns7¡@k֦o_0>Ü 2ًeNxBKoNZ@;NuY]暃٤ޭ}n9Wa%puW``.>֊zpխ|iWWBwb᢬15;lE?] yEo/ׁBq&T"H-mIk%xBI+vNk]VD'Oւ^T2:#H%fq+[c0ɺpX}CTSԚ1$50 '0)SSIbU1b{,uXeBشhl&pʺ$KVՇSʺl &h45 L4C@sHZ)Jn9v}VhF4P:jJ0 #&0YSh*ñ]ˡ(΢d{E_@`+ *#A]\^]\lL:FK[!!P2hOIBx\aܜ4TMM.1ڜ1ߨYJrVWZ\ V3O0a14x7[b'R,`rci_#GZk|萖U643ʦ TCJq0@Uaq  ØMji (A4,y1sE=ePjVb]bcWB&#1K"eX7͚5=VB \.%QC@*Utȗ q*ވ Xq E)CV4XT`tT-Yq96|,iፖLTZ XbdKw%2h )7q,Ӕ+:+t FC5E_'3T[\} # $5IcqHJƬE6,2P}ƕ]b-`P]-hG!E󔱸/@1x7 ~nVor3:wD* +VC$26LHpe R(i, x=Te->N섌BeF|vI.`@@/fsP6DR. ˅:@"0B451E="C:g9MTU&@*;fPJid[QmXP (2/V4L=KHa/ R FhӴ=[ޜ|ydz67 2K6 @ԍiFs4cp86t&7@Spr2HuT5fZ[(#!eQ_M ofu(j7 JȀ5Đ 9 ^QnZ}qnPD;uP"X ;gd*P :Pt Y40fA==T}f^ 6BoyE#XR)P A᪝,Y'yˈnU èW(eQF`$KbPGb$dib`,TuNcpB)أtjEЏAU 58';S51;˜1riW jYXAJK`ٓ}ͤ:A&C@XP͙O֢~\s=k睃4Sƾf1)B;1kJO@(\u(+΀|" =`e}]HO8%0hG 4k5.(=PXm(T'Enok3 PaXg7v֘%d5]C7 sȎI9,Mv=YEH04t⹓WrhI܏G9oj#BJ_]aD;): B) 0P5zx?MW뭀}B4Y膅hL]O(NDB(}  QJO?lG?ޯ)zy '7kz{~uo~iN>ooyw=N wz{uuqy iwNH~Ͽ ?\~}zoD:9ŨpjfⴟÎ^'~WG)ڸjƻ@ߚ -$wDOOPUCCR!ɹRq~=Jk5*L %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P^%9(%5V(`V"/^ +Jר8=@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C z@/(hRp@0wKL^Z@`uV %+T%%P %P %P %P %P %P %P %P %P %P %wm_ ~#ۻ(wlC7E!9Yr%IZ^c[%[[ĉ59ȬJJJJJJJJJJJJ:Y%X ouҀ\_:xF勨YWeDyy ٗOx/Ջd<'. qYPفRRӧgp@^ RPX=dP (+x* (DyPҚ:E#PfTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@eTV@nh=X!5iX:>6w@}a:hJ蔄Kk*.\f.Za[/\he)40l ޙQ(>x9)n~T|qt?I_vVrOxb^g xޫw'̐g*Xw#~z*y ]wO*_0WC+0By`fFOqsUxPNJ74ȂEg~py -fX'# L]YEEE*'C/|(SZYr+?w持b,y^X) I yAjY\׶"8e!Uв*PX: fx)`R+)݇ MB \!S+DE Q6LWCWKj  dրlu(9QNЄ t BR+Di Q6 .G8uJEEWjRPeBt5KLm+@itut%%m#&xQ`~3"cWxWq vbmOWЏf4z]-nem{+wW`8lZXAQG8Wa.:w4Oߢu{b0eqSza! Vݩ.?,?{Qy?{ˁ 6h._NǸqc!8̰# ,jlv.^E)Ψ|dY!:͉1z?O{RcP55{)Q!vwޢ>Ne*!K [Ɖ$,K JvK Qj-ĤDW'HG셖 vBf:E66ڤ“>e}zwՔ*=-ViXbhexkQ#'"[~^-h*[je`mul ERI;\pN;CIR읝wF$DW0 ]!\R+Dl Q}t:!Ҍ+i*thm냪%%6 ҕaBdP*2kdJB$CWW%Y`9!kWLW_ ]^T_`& W9ns?H~(m{Еtئ\Ki+IЩժt(5NL҄ sʓ+d*thfJK3] ]q!& 6&CWT vBd:ATFlwT1LwS>Q=a(\[UGRRr71˥iNŝTt^quR]#jVu!2S(ZZʖ¹2bEp. 0޼^E]'J'dpd,#M*=֝4ّ;EHJS]I$CWWT ~NW2;$]7#xJzؤLpJeۮt,t񄓔bR! 
\S+@ŏ|~:]!JA2] ]"5]p>XB`4йt)vKxX'ݳk79.az^fr+q_ @k%Jp:q2(x +1K-;zq2x0Mh^}ByQ\H?~Ǘo^}_Ţu՛hx9^ Z-v8ojxW7sH'h+# 1A(1W *V{yR~#؍ꃮZo8F`t;sZx}De>8D6~!4;ٸsupgpk/Б @,Og?-'^."{`_r5/#VRphe /J\M4+Bt_y (aP?LIV=O:ꅯǡf7+:t_~%3p{p_o,dͭIE?x?Lf %T\Jml¬rA1QRe5q+c:D++DJR"2vspCtWnծMc_עG h?I *|B^ xN6h͙@LI\EJK*E yvLGm2?#~]8L8Y}el ;]T?!ʹ^F`?X|ftBSwsכ]?x *8^FAe68}5Kr0r\JwZOնxC)}s~OY(lLv4@֋YxǗ:OUx3{M}!ҡv֘{3 +tI(@Rbozσp1<?©[{s_fV/ޟز94Ϲ&lmŲJC-QezWS5g=Y}yz"?_\' uyqR⑖-3srbv./F*Q`v޹>Ģ;;ڱ~itp*vA('2ʋB("*:c; ]`aC'\;grZ`s$ +Э]#G21D>sb;!kiƭ%w`~lЪWFVr+u6?ϖra)2E `bJk  !h %R5?`*%i!++NM|f6U ᑚ21Lf{``pvO_ 3dpqua HCf4nL\ϧg:rbȘ,#3.Xq[9S|#6hQ:*M鬊@+ZGgg2Ҳ2^^q x #xc xgɎ=٧l+gy$RHݸhH\;4nX2͘pОi+0>0A1fJ']% mq(%u^)j4ST+GdM4P* UG;,-9d etcWk[~Vؖ`걇x^22ҊuD2'O2J׊8YzVYFaLT:P 2bUI]mo[7+-pf{o d.pw䐱PYr%9Q#ɖ_b|}8 gQ |y$5! B k%fceAFnAM-piz>1쮤'k쮼Vɭ ӳ69pC._wݿPETʻ#j&)#!di|KN$FIR4ʛQ[䲒8m`v6gMB ZxϤjkj֌ljg Pb.|QZX@N7,]_ hh44柹DJ[[+ȽtGkGF\H$f$$) Bb*yL/jSh"S-p&ohtI:c,[;95vq֮ڗvonxT4C2LR'0Z1EOƄ, <{n9#)TՇYi$Xl)zqqGW8L> C"?+8mIp}3e?j%P_6rN*&_N[fkwh6\_"hֶx@CtliI59k1s%tHV44J9m?,Z`^(HWRrTWNF傱x2(цr?z ,2#L`W{-[]ϐ9A$X+wTXI8H9a2hPtBYhO) 4ZevS!TҘr\ڀAFǰv9{$6A:$)}B[B֓Ȉu \yY3pJ 72a NCRؤ8.4+B]U8*cb^"#IMin3]b!!ܚ}v< NUbݨS䪈R9;6B$ї:S1K=IA^:$cK~Oo=} bǡ&kbl tzᘑz\޼(d,Qnd'StZ/dg{kV;7:i=#{8[3yqOrc֘"NFg)6.6xKe(1"$!$?)J~ XO#w)WϬ9^~$&Ѷl&&?G %^a@ mrDɘ]/դVXdv_P]D( ni`Y{Y^ċ/~hf[ƛv(C%VgSI8פ1ZsOJ^L@"ثC`}O``!M7R|>KG|jUY*۟H, *to^|4=DQxӎók:Vm9;`ju?4$NFW~y7߬a-&̗g W_颾uH(ꩧ_~!He L (|`jIl/ѳbHz~)/:^b Ep` _m U>g6%.io hTӳQSoʝ+x8t.~翓]p):7M1 On~(I|q%}p~)-k&9˙J*)u~{6+v<ɯIizcNN&gH%NԖ#h88 @\ &p>{^&[fF#tFדvPtN۟NƳϼ>tgǥ6_|d w`^>~»|bفf 6 lŖVr杺5׿Q#r,Wns jƳwQ YMK,ω\ج*d\e( 3L1*YrCT B2x{W` uR&HC,Ist/cxȁ\^UQzc+x[kv^W`R)OkNǫ*^vU? gū?P6/:F>7k"@Zt[˦kM_׽T; ̆JI<=Δd8R\i&JC6y.-2 6z6ʞ;[Z{ A@PE dbRŬk8KoV0>T-rIor\{uOV\bcw} n A?a9~q;kՂl0Z0 u#W^Ռ ֧Y7g#?NִG5Fk 51Ȝ!_5Z'6(Bo>! 7ds,tȐn }2& ҜRVsH w92>Đ2.Nd~DA px~7Tq>C=)s TĤ E9b(u*86O;'o+ZalZZftv%T/Evv]wLgA|/AgA|?re <=Gn AlFtCm#}jd姠'e5nkLxĔͥh$8r2b1d}`k*~\d?UnQ܈@`]8v:X5,=rWnF {gcM^Jljfun5 AHZ9\E +0)| Qcp[wkǏc0 cZ{ >L'.̊spf?ԁaƕvr)pL,ذ4&-2f1H9uoxHϘuoMZ:/v=6܏2 .A) J`RKQ$Ɂ F($,Xb<(.3A8\>%N`][o*i$% mʧwo Y\kDNZ+;ǍI -F1"'[gȳT,' K4[C$x.\P+tt1J(%4 I5$"yz U$ 2TNw{Vs >/C0Y1hs`IH 4HX3!2҆ۼ3ga8[SmM{#+o0A {l(2Ɍ",cNKP&3 zA+H[I1` {RqKdO[B<:Iy29|[E[Xz.͟ys)yDނbmA8plᆿ'JLC&4.|QXXq~]0 veu>e@"O/rZn'|?NجˣNjJx#@0~tƎ>-xgtFG9ӣ4rYx+Gv2O@ȌMw %xA 2&-rg 4꼝/S{[jhe]jFI[e#}фba~?ℶ4Hʥy?Fуfp?w~lo|w>?Y^xn&<2mK̩c8Σᇓ wk"bm͉_.(=fLo@5iiָ4x(Ze'G]9<η"cY9>%7ͺ]%]rY9`H+ )+bLcyW+8dqqKC53['~]#}Cȡ4zgWUS1oA($47.|[ϨCB4"`@C)81vöq2n(%&6-!\if&9ar<șgZuc{f:cky^Aρ$3KT mJ@U.EZ(w]vhN 6{ {.;s_?1B]‹|5+Il]]ꍛޖZՅ/ocuoH<)r:?'DLy>ɋX XW'}AөD@{|!E2RJ)8S&ym$Tp(Ɍ7JЮ:a| p1i"b+ G@2,]ԶDu8QqʺHC" JχvpώRUXpģ`VH!LJb@3*'e1W3.Gbr-wZtT t7.au⑳?/a؞zPp95<3*`v!.2i"f 3t!erN-]#ʶ3]Hߵ\Ǫ;^-).~i8cLJc(rI@nɕ塀 &_?v4o T$ e^sQ+c4Z (*04uISC ȻujQj)g|{n{ 1dn{¢Uj.~9+WFd .}\$A\][F BǬaJQrfrT6R 'g,CsT9 "YZV s?*N̛o{yJoƈֿbkotjp?2'B?sC6*QHxeVșBR_-$y)fp4 ٨z:4P`R2|)h41. ",AӉXEBѤ3iC4O3_$+#)7O9N#AWq.D;@F"!gu4hxJ!NVykWº -6}G!]D=5mna^jBg} QSwUo_]ϱd+طY-JUgp̮:L~?]Eop}eE=_=y]? z"[?۫L@2mds/C:Re "zCas\fLsJnj3ᕊrBFV{6%gT㷩4ҁI|qjk~ӿ3hoE\p~WT|DX߿~nXpXe.d.\]rrRAI.V4bg"})N41idǗ W?!R%hԡ[8p6p_?I۔׃Ji20c~G&6FvPTѡofw.9^Og p5`^7<2I~+]o0+aV{ ^ c˫R F?`M RMiRHK42`p1u@8@Y" KdIJi=>d,h WKP%؁;X:J4ֶ~QghtT,/V'f25#_jВdT/dylxp6R Eyghmy還iwl\SUGt֨ iTt_J*v}"ەEx뽊lާNgReG4UYLUv@T'~&|F-''6M;1krmaj#ث7k6h縐e"&%x܀un28MzMrr:1um7j`v<:@ۙ0Od€G(rtyz9-sI#v*ZZl) GU,cH[6ߎYBvu1VE1V^eBk%,'^*`h0jJQZE.lؗ'+(DqŜi+:(I!F P)`c`aG{cF_3հ_ۙ4M&AɔZz7ݞ`!v<)->c@|y*UQ-3+($<9jhJp NrmTVtfk2F g T/ոrQ(mb!A4,+sO{"IK .! 
Ih"ãXASX,p.27(Ch)'%iW6 =MufƱXPy,W>:%6Ld-1* vQ\UQ|j59% ɃQ=-\0DKNE8O.0.dE26>t)d4̍ЍMh+8:ww&%eZ`B/誟p мvL,RpJ@^kf!&,tqkܞPm6S^l\'>J]:r ;:k֔z"\ІHE ygLDI`QVu=xrS\K1wҮo.&JXY2 -gE4ru~O`x;ؐ#Ӿ. =l?1 ڜt65֥TO) E_aMlr'2sW kdilHe>>`SL灣2ӅUCYډ]V@Z\l;DHKw$`~ +(7ZLZ}5 LM޾%QfRzx?aKztL|zd|=#r8JI墢L2Q_VrVKqoxu>W^/T/uo ?,NB˘ls麂}:ay=[ꋫEjmwׯVL_~^y߿ fMnqUg&ףx|Ż_\lQPvuh^YX|EszNhxzyT s7zi^yg6hK1^iUnŭb$dڒvڊY#}A]~ړp̹ST&H j4o)yUPq 0csJ9E6zt΢HEȩ7jg/X>|:C7?7%?yh{0u0~G{.odE[SE%v껹Qdj۸_aKvkwd4TY{If*N˅WD*"[~CR^HH&.gF"'Mœz L9/i >na?,8MgXL#2$˃i jl,{@*Mr9p2cpmp%0~|FGu<|,jt5FYH?/rpף#y}5^GT)Pgiy7) .jcpw1BKfT~=~xe.fYk0pTNGdzzd6PF?]Lsw~t5'y-0vK{[4t%)̲|eh\s "ǣ@O:neouU+0|]:c˼X!԰Bq9rT!x$mF$gބ&f1D1FvZ117l87=s2n_wMw܋˥)]n1)oCЋy,ӶT 4gP6nR1[t']zTGT\D5 >jk-sPtk$'E.>KHmlx>nUm,ҕ\uڛթq >{UgLK%#ǵʋ)liR5oQUF+ko/aNyo I<_ WMOvCqoda 2fvx6%TmZ8T=X ke* *VgБpUUxfC`լYVgKV\KP4\ 6UL*(yn=XT )  p]PQw]D C]aŨ18+FYE:َ+%]HTΥ8:|JX?{gO0੒[ &Y2nFW  F]GpJ1Kѫ9Osvgrlۮ6A/,O̓!pr.,@` [`,I s웽ir7UeU.CRrp} IT&ݰ Z)Z=kBo[5Va8uˇսzm"H 'tI5qdm\GL&G1-=IT|gkUd)DDH $jy9D JgH)yӈ32tu_ OwfrRQI?/y7UY肦*L+&4e٫mU?8XG"3 3bsN9c8O,^|Z7~ hj FT\$!e_( DVfnzT]gcp6:>Vg㯗/D7۳sw]wb7B7bxVf+9[5?]s{N69>BE-ѡ(2O3@Rdij3VU]qU 錴.{zWB(,Y ިE^yx߫guL@eU$g]_~b昲ڮ-}YB22"`'mg22ܥr-Ȩc=##t 32%<7fOjL*󛵩R$(Ke@IK:/uȠ]_HekG,)XPў%ZSYy!8WkIBY)Xp+(Me5u;c"V\ZUE$:W(ZW! {HV')~[1]%5bVJMIR""M@EbT-3Jj>:Wo_lCd(֪(mv*KYEibmF|A,GEF%, E`)oeFmMI?"VV\ hU&D9R4ң^ I5͌yU$c'.DE\).$azA \Np2}]QTVy鐠O c%d\=+aSZ@b&c]&CRmJ0!hHFd v Ithl>MyxL̶vc ־g^]!کh`Bl)kY(st:lL(³g.`(>F(HUBF͐e65)V֨8fJ-mQE4M4a>Eq"ƶ0bc+#3ZX'ف w`}ЀKF2@7 Sb, mKZL@8+mLD! )D Yg͍6F HDfl8T" X:]yQ6̋ŞI<C6elL:‡r7>| k0jj'_TuNp~r&;WqgmB ڄگY:]Yg3 "%Rɞ^ ] 0$v3]mWvhPZ.r[ЕjשL!:DWµuSP++r6Ru.@WZv"Nt J2kAuȯY]ZNW9&{c5p$]`k:CWWtE(Uҕ5KK˱7=}\?%Ǎicd!*Q W: /FdR|Gye4Gd:Ws\Mp^cҠ B5=?;)[,wKD͗3c׎BQ$2L>yT]n ㅮ榍j}Q=H(d/:APjaŅԩ*χqR  hE7`5m9E_{)\il|E#pU*<£mxeT7&MP2ykfELH|ZLjiiHThB6jou{3v00c] rJBOTrm1F֖_Nb3d* l?c;9`F2ͪ{-,y׽ukr"yLg~?MgSͧ\I^te> %rQvWcэetuHRUX% xQYL9_R(tYŬgaE^Ͳ*(i9 "sUPUE(/Q,g1ɛj:7юYgXXg׊Xgc$<:#B+Δb3tEp]g6tE([$+4JW؂ ]\BWֵ~Q:kW EQ+B"w-ޫPj++gּ;WlU+Bizztk[[Xw 5+t`\eo86-[]mXlwϹDkeKK+]vzЬCt%w++uWjv"HW2uwh-ۑa+ٶ4HW(ւ0[V:CWBOvxF3%zztܩ[mWO2_8ڪ7(X:kwruz_N:j5N ![֙ gB+Lu B ٻ6r$W>i/ݛmYc'f|ȒF-Wnɒ-ɖՎZqO&dç"rg1#$0?(+UV7%-n+䷸>J#pN=T;J)i W4&H$:gŨ`i-\}RZ }L 0î \/*Tp Jk1M']c$-nzYְf'cx|LI 0?$.%ikwԪΞwƅwOaM]O8#˜f4wIޠPN/Or- LeIq2>_!J-L, []W>]hOPC"2i %B#$FKyLÏI~Lsw"Eɋ`b7im萤GI(n; L,5$q,5$i%j:\%)oU:\%I\y4wV7]%\iBnFcwCOσ$Oj / 6'K3.xXo??#2  >pe)V3zh0.>s  ӕ7?GL-Tv@wj- h1sk'35s \),|F8!h=w>Jp;Zx8WKK˫JJo@اE2FO0Ѥ{A}˜RXEA`L|^λ/޽Ě(ޤ{œYT6Je!٠LfL AS/2盜 A`2EFB24)Iᦗ7%lMrfEVXֈz\xTC<~J?NfakEk.p ˍ4M٨:g9!GnZЅxG` TKE2 _Px!glt3sBsnlpdpqvNQ-?7m 1F: lp6H] ҢB+> J5ܩŹ ^~~b`ȓxԿ/VbY3[/|ux:wz5<(ܼUf _]z=dnuwG(z J9e|S ^{>}RyWv ~z}Z+}3\Y:g-t:0ព&~,fӗ)#.ä {90"h>UaaڹjXgᙬs7&BHeŲGp`GBJT?;zAޗ =Tdh :K|7/Po}w%7!N\ٰ+ku_|4\3iM QL630NiL8(6PmiZIE YyYQ;JF,B0#Bش>HԔӴ 1h#2&"]3/MyqEk\6>_R~7Ǡoyq<@wtu3M#~1Mm|*˺8=eNYIb40`m@^YôWKB$JcDJLR.Wj`=Z`x.40#XShu(00LR&pWV`E*Lȓq+h'F6Gq3 .PUa*J&-B sĔcX"D@It*hp} VFukYzkuZlه:C"C )۳w&T'^d}  Lma|w/V,?4s佟dx!K܇1\8QƷz pn}qL;wN0*jt;" Az_FTB/=T|6: OU'qރ `',O)a|<Ѡ7p }鋹?xԇATᗳ>/x(<xza߯->૷6A=I G׷h>}TrҧrZ^ ҋpCJaXZluSߦ yWRiRڮU~_.}/Xί;KC~┨8{Wג.Ii@k}s4i"vgK<O욜4W_/R& eJ09'rbHT9~j!0'4OWӹ'{U' 3ߟli*؟yOf?x祚"9s\Xd;V'!GhI;OR|(z>Ū{@ȣsRs'}iС3r!£sϘee:oX|sjZaO i&XSL)[Z֬x7Ua~wr Q@4)IXuih >a߂^-4ŏYwCM,צ+˫''X3/lolA59T& Q=ҠΈޚB은-m*YW-YQH]Toi}1FHS}ȅD*k~KW4U%{ޔ:#<;hǹks!,ʀ.-M] ^S>ǽ}Sb9bc e 5Ab; UTsc%bDMܦ;G/bOYm󐺻!R<Қ+6x Y"tF 5NJDa j=5 Q1ф A2 c(M)A@@-8zn;&=AB3B@"SL# ȸ%OAj[ K /i(e~gӰpB7Nmb[i>pdP;x@%[LV}HcHsL|pVS˓rn/%yyNlb4K`ѓ ~ %M~ OIX~Ra vQH&R)@xagŀ"bbW$&A  h)ޭ*dGȋfy1_Շw >ׁa nR>*~Y,u{՛c8/`Qq&üyTQ'u atq[عx!x2\t*דz| }q#BDc>e\0s,NƃXcgmI gZr9Afxf1w){sMKa~cl]&WLPc C!fH 
x6HF7g)Oc.*5X@Uv@M "O~kJ{yqs5lÆ`\ag 8Y9[˾mi8n8Y(ɾx[z[x Kr5S. K[3ālJ[@ WLP*ˏa} B`RGQJmƌ6>fidˣV_/^-jYֽ,kڵr##]˃m˴Ĵ"=M-gRyϚmI%t[wmm~"7y&ٙfp63ؗ ^m+I߷غXH,ٝ )ȮXU,VѲ8CżTQ&rh˶;,0k(7k~ zct"#Q$@ `DI%(bKV9 l.rmfbM-\]P1`RMƘUGZry%0- :Z;*v7cl*ZPRI:Ƣ  9,@Jo Bl=T1tzi|Taº ,hgIʜSQB`ń0R瘤{3µ4߆o[ȱN:xX"8%4B3\d1$\p RNCڷR"Ihu[WRiHF7%4G*YPKz%Y3޿f>}$OtE,-\Od_\QJ9 aI54[_ :ιVVI5^e|TI*S%~X 8! WasA{ m!ipp5GsmF֝qfkw?O~L>x7&,q[Ϻ'jeEh.F͖݉, cwS G*%aa00^(Ze'. =;Z69 GlI.u\Ѯ6uXTBA(]_ rx`:A-NK:U;Ə;{~Bx_>?=?~퇏Gѻ;[uB+0JG-"$qGpkMSCxӡ45ɂ2hr ƽ->^YmvlgHO.Դr]}SqT\/v+D6q`sw .j~*Q~R>B ii܃]+FzL(ڤJ0֕Ysڄd|Ю@٣%{GIdFa%֓%|k(m0żcAHfx^Wx"v9G"0}N^5l(=ң]2F0* J}`Q|(Uhh1'm`#5~ 2?'*$$nHl=9/1_m@$=YjɔR ڿBt-S֪'-(k`:TU*d=gikO/rY2q.}EgKȜ+mˆ6ܨ 2{U`EYp&Yq6>M^&z5,sߩg9.@yQٳz\rM1LYlW3Y'Cؠqgu jX0(s 3:W!Q9_JN٨|JA Jyp JHnA&-K.(0L,T+/^ $ht4ګ%M_-Y۔AXY+wG|LjYG:&ē~IxF"4njԨR(z=H If#T)X`= B c|^k6WAm5Qېź1>Y3l8\okz/TVFQ~6IR6&y-.FeJ* MS+EFetB.+,`:f9YN@ GzL 426F؜6ka!)JkGE3^tpŋaܽ;;=G/sARu5fd^Z("F%%'X 6,CH6:DHH#k 6"2+ۨ3v։MhU)MAtMs?b~/ΙۂڍqV[ v++FMsIscB@kcX)X\gQҳ'3y|x"Jd!fH4aML,(^J++RS)-ˈ,# dT'>n6i 0n "6""4"bKum,&%BE!didE-H Kxn0&r 22ٌW,D4m$ YiQ#g|`RIɄ@4G- v~È9+Ԯ8I=pag|qɺ(Eb8!O>(S bvRY3L, m@`gђ([\acܱ-x('1Jǵ,~|G(w(!$'r]I_l* _a23Q;WD0ټ;W\v J WW9׻]B\rQ \j֗S*T++N5g߇R}\LX&λ96|3$#v8&8dWF:Π?7-%UO Ê{f_G+W^(9Q(ٽ3tf^@"O 2qbv0qu^8,I ՐkH "FQpk^b8tT34ex mOi~Lm1*=ͽxfF-!x_exjU0;,G'ü`-wF(ڝ1Zն91^! lm`#U!UVmùP[5•Yqz,\74WDe+pUbP9ѦWWdVw` ]r \jv*T++#SV:$бs–&xG;y3?.AZù\#ӫ|ˑdZR E +'`XDw6蟝ߧ1Y;#'h^w뻡7??~yo4µ*3>& 2rA90h SAipJ;kevk(C<Jy"MBo@w^?-yen).mO{s}n =v%1gŜh)s ~%ͨR(g1ES[e-W\jRiԊ칈ʇ<W8 kVkݓ?'6 wjQ] O<ͅZ{oϹٰ3/O#ݿm->w"ayϽurzHz.=xOnvF/"P+Ͷ+J*R >YDqUߢ塁+6J+tVU\qf1pĶ* ~f kʃ' R䢋2wmmKNB Uy8ʾds*/rj)I(٫i I]yJPʃ)B@juho^A/Iθ[B=*3qe7Ju.Ei⭌&\.$|{-$PkW.y|+2=3|(V` &+DdrFȠkD$A9hJ&3H)B*6eD,$U@I4ΨI$U KVljcfa!=r!E}RTuυ2) T+E󩦢^EOj} 5Y{6 ]ԟ|ւGpy:Ji/lճz&1_MXre~ Vjg>?~WOaG.'S܋S9gNi4w@l)]ݙx@w~:͟a;Nv S4/{1p9_/y8)?Pט[͚cF?ɏ//Nx=-R/g鸌N?fVuPĊ6?(S;?G2B#^>UVy<3Xgj1ӛ1O?^{9t+Gb\5wEz+Y}:9.^娞;8. wj]өaYgxzv?ˏ?Կÿ?|@҇~]<9U"cڐKW_%jaz&!EWw{o6y8?0<_hp &\ßtj Oe:ze+ҁRB'Fg&ǧgU٦0X4 P̟;dB2> ۴1d@!MJkJi%:b+`a\Q\,2%[l|LMQ`NNw/Ľ־x7e<\qqywI7>yuUb%4LbMp>/D} gM;T)U`W*ey_K*-¡JRj7JS8+4ڬ.<;n 6˂ 2Dx5lI Wՠ`'(YJthA;؞^w R! ^@ED*!$J2̠٨ d6s6HĒyTP!E3"Rr w:il6gKǟy;z0_g+\Ƨ]8.Nή6[f<.>}E[WӋ/_уmvC]x'g}U7ۇy1d+b Ez6 Rj9k>By 21%5 R$dڢVҰ`h)*Y3UVT 1Q2֙b$ozRАd[Xlߦ7&L A/MKo?Wt2 ˦Ϣ2V7H»>OcCG=tt]F2LJ6 _9C)d ](lB,STLr#?BXG^BABӔ~Jl"J`. (K-oPhjG= * L]cL-#΃䷫7B>y>>;?2k/q|]:yq2ks9٭/Id3j2,YO.)Pkv5R6*6#bT$yr%r oѠ <,%҇bP(K{Rcb$AZ5^XҲ֮Y{,$-@6Ia8&q![!JȈn#=2{TX֮P!H%+l*bb0tϓDљsRk'VlF0[&桠v38Ծo@xthyp)![Ch2XK$Uљ賵X $)X}MhÒҀl!33Ѣʖ&e̒a`iN"Qo}'Lx cWDƈ"XrRQI p(X5p*^!_cQ*m3:ȶq֨cA%T )Dl2ȖD5:1"6g+'?8]d=fZ+.Ƹ(\pq 9DC)WW_." ,B=xuImFݘ)8Cfq(xv=@X4M^7x?QN/h8}fV)hiQ[V4{RVA3|DA3 [Pzze|u~K o_Z9V0PԹ % >hЯ;ߧ\|2&Wk> Dr #) 5PrvdS:`ISJb }9\iXzVu|n}٧>|w8Vh;~/mZ=O.{FR^jŐBG 6QX&H蘆H2 ^\)g].a(Pt-tϬWFƱ$Xxiy@&S麏F #?YDRb5lh JC 65X!jt,c'lIe!n~bK-VmG P&EZ[vaܰ#lėZx]Ft?72.R `H[0D{owN']c-, 0 & f0'6.)!F{6r$J#, n`{.# _eɎv#ʇDawbUQC[VYU)﫵Rfx&^DC'\I 77 xY}UG Οr^Rոb:vĐՏUIt>7JD4ėqn?ky˻^]h+hݓqG\8,a:>^j49% & so|Ey=_hԨ>k$ZzNot }h}Epyp׾&>GÿMQ&:bIpR`_{p͝՜s4뛀bO@2E:9?/e+7x-n9E' X/1ݫ~y T9$%X4-wP.wPRَJV<}HWsճbCM_.7Tϧ`g^5!x4![Tpe:/iAeJXJ3dyiUE^2G z!%c"~KM mz\bi)1qYBm x"(oh|\}ggӓ۞G|&g#BVRϿsw3] @݋w R`wۃ@߳x3z,fɼ%f('cV'݀M_L;Þo 8?o{ane]&%^d&n!+ފdU\,m*]N #B\A s,ߞc@x=5&E$":$-qų ce2t> zC~]by+}+1^j]0JB,|?Ƣ'h= ףbI6j6oȥ.p*Zj.F@$KC.ȥq> N;d;Wv!DàMt4縷xH U׈-0q40?l츍`J)g<$dL*BFR":|1:e%g)aO987j.[$_T?||M, ϯ:or>{g"޷VZ7{ LSbWquMRFQ?ɇH 9s |Y+D(," ;n(,* J 0$eRq4KP@,mKo./7X/r.R"B e-:YuwyK&KCXAT$7ѻ`sQɘ?XTouunV 0ru l{햃lW@lp۵m87?WAvS}B[BFeDr*! 
ggQH3XA[ s^%rSH٤7((4U^THIm|]so.&S#Kntv]V^@|F=8 hv#,L<M(TZ(PJ Hł*Dˌ G.ЪR(ki83B{rxa8+"6|9բD瓋 4چZ׋ `՝Ӕ틒yx9KW'~_(C[NƿFD[Oeڽz^k|+GzveU4vyW?MO{b|=X-iu(F`Gވ"Z.ٮqr0foX9ёrݝW6mgHFl`׊c)џ_=fӶ~V dY-(߀}`RH&dhAˤMiXUƘg#x(Q@0H3ѥH:"J8].Y~I(Z6HnF,`"Ib$ 5$وU 3!T?@)Co"!"%ѥ0@1F/ ]koDZ+$/)`peڈ &1bqѳzU+#5 5J! ȲRNUnҰkMk?fu @<n ֮F1/Yf#9I`BTRX*tD1cy}fXp9b}+MNV6uk8M<fRm \x #.f@æ dIKre00U8&8dglڂ|A cA1J# I2HeV**S37VY\gy=(f @tlܶQip|A.Gh⬮Ko, $X0aE޲@#܁? )dgZI; 0] #,f, yAmJUΪ庾zˬ Eaek,WH6 "یI'y*0V_ \ F`bAsnu2i:joMv:*L]DPCV  G4T\1 0lsFS]a Iwy.@GjFhn0ih g&%<%2`:39P.3d5HNJdt A(P`AHGdi'O H5loجWɰ$>?=_oa+@1\ƒ <@bzv)E^g<2apaBo((cx)U,JGU2jY:@!E2=B mX)+(bcZ6l!&vh[.u _v)&d,¬@#!2.P,9tR0[`4bMTFշ+z3!z crTsuԫB?n'Z866$&B:4pg^9ׅo@{+rI'dKjf}MG^o:Sj Q (껐.V+%)Hfu}ہk4M`/մ% ~V=g_Du[F7{y\޾`՘6Fɒ0 ,Jk'{lS&u㶛eҳ=~&4!pي83Ƃ`V<`Vf˳=gG_Ύ_?&˿m{#L tzذ^| /it;χ|gm%7pWuMuMuҖXrvV;HԃUː_N*?n/,VZܙ-̓Ld4<;x}mo^{ͷ/^q^=W/YOYIH }&p`0 uy\ ui\oyilȻ^ !וCruoOnQZenI߾翍i?\\۟i3E6 \1/$Fӣ8EgoZU +EB [:*m1{96~DIrV(|­\ZɆoC1k#͜Q,w"Jzan9-^M\ww̅9륪Qmt7˞6IR3jۊ(ᢱCf_2gjl:h%8iN/vy3-:qL0Ol-6O9<;y |GN!;C#B7(U^&:gO= Fc'W.Qp?| x{J]kgt\#LpXg5g5KfJ.9$12(MOy_*ɯT3/ʐ](+(sH8k r.M-93h-\hi2y0wgdq7M'~SP;u[(vK9ߒtM]gzB=`'SZSZi {Jam9fY4r2Lp0^u>g,€dM Q4ſI,*O0^ѧL|E }E=w6"L`ʴEsSkɢCfmUg77:em( WlP7DI[CeCa#KX|:΅S*2\9|VYU+SlJׇP>9wK׻7v9!m!~9K8:.O˧2lNJ7M568g/^dO=uĨ⻃ktޔY f?u=7=j+-=􎓮s|fwBfᭇv:O +#"ܧi/]vv1bgr Y#y%Nnݒ-jI)ewĎŮf=,>CKgnxC{~4;yw7apHGE#\qͶũ#-P*嬘7_[vB9LPz{gx;/cB 1lp}c[;ȭB#st"̭y!GɬSC욎/B⣪B :+@e"o"Ko*2zMO២s΃7i'Z |p59ļ;~nX]@9|Zóg?&^$e^ 4roA@DNJx·"TInL$3ɬ?_-ߧi8NoiyMbq58nh0xc6`m|5wmy7FF_=n sSwߝ|߸l{<8Ѭ=lj EGu#֗ߓ䷘W+NjħwQm^ E,nؖwPB=ҪfPF:Ez ss8说U"V:ROD=q\/tt)&W\`Ec AruXr"2a@pb7X3z"*ˆ BنIi3{& S? 3حaч/b2}\8!frWsg7OD_VQ R*+??;~2˳#257ѐ7`YZ%B5Q e1Tޑg1svx6/__I#znۮov[_VTjvޜѭb(e:pi.>2 +gŮH\ 7<*nF2=9=xm 8'ywhgPs&1W̲$;2Ky%C!4Ndl֯(@(SB4:W xe0|~1:eA{B <%[-]!{RPJSb@ ^J#H0c\PMRI!j`d~kI%-ݥ:{箔-oxbh8{o@*##M j!*$IF,R's+r DXQK(tblwx6KXbX'G=?[%=K?Y$F%2>_Ɂ-|Vn{7U}ע}FM(ׂdL\hOY1q'4+&Dó.%DblGrJ1,,3B HG+gW92~ޤ%_aj7'Tp:{初$$H/-r 8e1Q|"<ĝ&p4$L=,&WhY Ąª6JxK+]wtoQ`[z2hѨTkcr$y>fa5߅p$Z⍤2FTds5b D#b~& a]5blÚԏd2+X싈0"{DhchW`< )14&H@st!d弇 LTVD=ĂX̜xsU{BaKszɾ "qM=}PbND 9r5$rm>PuVV¾CbWX VP(_צ$>)7JWKUwmUA\^y*a9NT!)jך4C& ]gt.IPt(IPJ|w֞ _ޢtipaPV9) :i`_O*Sm4]FSʜl/u `;F-ՖȖ );&Muqx7ؠV6 i92tzґO ڼ䩥1hM1LxB\&qLt1Rp񭡃w&Wnxd2QUHS&1a!O{ !0]GgҡCД,Q<roЖ(R#)l@ݲro;F5: hԩH0])OsRi\RZ %F{ _{ӛYh+O n 7MMrB%}2X(?e"e>Z|Fn)zj;X[9Txj<\"US AdqWB #/"K["Zgyv'1Rxd36RDI"32 %#hC 8![̢xi1d`,P,G-AdX驆Ȝ7Rl'KYP!nQ홓ꌐ׬dەʧLsԓ9V4={!2'%9@uh;owuBkDZKTrE${}1d^xgvТgw?J9fԣn'# K%@<;Mj:81 &=:GZCBSAivJY*6$AQP{>CӊYv`p?φ]|[]cgw)|GQ_fu1foZtڜ0GAx?~㩱"WMƼѪrJxIKIxsӯDl/W0^,~:S?rSP3xu5~O VVSZr 10W|͆?^M\sZi9Mˡ~ yJB?Yd0}DH|͹@RR|&>ޯ'M:>*v"m*ʢ%:?\ӓfF'֢ީjsVMKFܵ:?nՒ_?QPVGMhk^};4oZ1S폙5ԺǬ$>g:GNnkȚ}mk6:Eu? '&gm[fܶ}+7ͳ|wG15`?ZRg`weg,]/5˗^ h @~._?r ዏE'nK.'y>FbHn 7R;.wㆳ`sS5Nit0R~܋$C,ʇ*uk͒tdrY|#نgf)9F2?7 Bu5ΦPdÃbx~`u9wEs2[onOMzt.'9gl\$x޽f6n;o1OnbUL/ ZL/X7Svfi Qbnwg#g}Z=_:7 Sd A:YmqG,KJr.NL~Tc9GZx+<۵9x;%'u4fEWۇt yDHjWr 鬚eW4c\6.ĩ)J-Jjs?8Aꨕ-g/V=vMɱ_0

'Ue ]ipd+x.QN؀K^T3qY1ph`<Ӫ0tgt6 w |blʪ-@U0+fw25vpϽZ[PZJ̰־ւF\,z}pUŵPJ;KiWow3=s(pUu{WUJ?dEBw#kpJUrx&[!KV_`3/bqQUV}drmn9`գFdq+G=JZ^ >N}>+;ծS/CX`@pUE8}*Wo`  \(ġUVwU4n7W-ɻ{0p҇WUZKi@"\-i"H#Ꮴy'TZ:SQq zmz>OR=Zxu7I7cym=~tnf A Pӯ7;7r*gaF׉: E4Z@JS BI'ͬv٧VDsONVhh'{=˦d@d%M47E2>a1}%ݗ?~;*m.҇oi+O;:*wz"M% u`5KH=-EK4Chj1{>`=٘P9jla'=jhb%:eI Y)SF@d$Ii2eGJa Jڰ1hlMQ(0dRGJ!F/TEyW.$CM.*sÊٲO?%/Strqi&QMEk0*I%C4[D01! H)( 귳L<-e (>>ɚ7F90)m4"H E 7q6j>lz^I}f쩍: e>U{9=ɿUe&pa<7;?Si}{l{|Eblz%ZoM_wEjG:\0R%gjw{Pʛ{n9&i2BRI@P2Gщ bjrisQ|Vp"EWJr],hzr]LX JU T؛8?_7,M3v› ,<,zՖj vyy_Єxz9>}n/إd+^z(Hى Qb`MXtϤSfARdٮ1**ljbB@Nyrg\Dr3b&fĎ Q}A޴c/P{Aڃ{[oWJfB 6KR6m ٹXkB%XCa1%h$0CfϰF礳c,UYa82F,m1$B{g3i}AcWD=#q@}^iكQ"=!FaGN;FXuBIY,+"zf:`n huRU*D !lSb&$jgDM͈xyy#Y>u%Հ.>T"eޛ OTbRڡ2&CS3g4={.pq_7<Ի3@uc?SFn imtG:c~5oz[~zvo\j{ID8kQX:zlӥHEjtN|8 z( ^ߞ׿䵴)f9cjGiuZiz2ZyյdXrwoCn{tMrn~:^N:hNg  lE]-:sgkҍ<ώ]]'|ߖnAxsx?L^]Jt~`Larh2Nt_|~d.& LD1jr֒tA_% o',Ix;^"m\Ȉ6dC6Y$D1i06!D%o<1 IIc+($%#b**FV;/q6ann =!#=ݕ~gqLtVy\Wr1V / /1r|U/u|كK<&x-LeLilׯD!SSwoi)襃#0ޱEX BAB1(I"*YB ~*PY S -1R,˪la4*uH* 2ٲ,Ԝͦ}JO^ߟN/VBܽfUz]Ox5|q\LrB|&izX Kk-Srq7Jsygcb[ľŶT޸ҥR.l><ƨ2bNUK+Wj*^EWR Q{S02虓Qz|?:=?Hiןmrh J);i߳o_ f>TJclB*)&Sҧ蜌ݔ9_WG|8QpY7ݭ9 m0b'x5O<&20 cpQ<" 6@6<2YDRQqF`YR4!}%@Z"l:>MvA^k- JrliJmxM,ҵY:9I7ygt潑5^IxoJҺEBy_C7P(9aC"lLڠ2&R43@'y5dKY9̉e!/r%쑥h-%Զ6MLc,LTE*"3Ump^ %S96X ;|5 +P\i,aS8Ib IJX#L[BWB`2)C҅F/)wV1kXj5e)2]NېHٜ@[[75۟.jX;,ZJs^tDm6WD%֊ Xiw0J V{*SC7X(x%{BA9=?h FH5325/J// &@&f M`MvZ!)@vzY/OgȞvH&2FlDVhj0Xɡh!-򷔗b|4> @o)zOV iJ Z&JGYΈ}}uŽcl;[ZCmG-$Y*aczMq%3_)5vno\2:d*i!<}z1i֜=oEjx/,4Ӏ$ [Z X7.)Fơ3/JB/UBmdF]M~>=:!kdc OHn '>Ƨ9꼇{z&w\=POv~rih($da `/9Z@8.MT+FR.NQym`ؽm  2`_K&_ϬU.VH: ,LDoZFD.A1 uӵ̅ʚtQȔ*dH} M-;[ҋ9{du2;-mVx&3hοr:XcjR͐c2ʐe*LqOJTr <鄞y61XK=Ut^s1bhvo+bbnd=S'D¦)ګy<]i\Vacpe8u"lM21Nj##27 XUI=Srt7˸1Xo6чPMB=)ٝ?BȞ>@ꤻCf>MW*Rf.aم 8ge6Y'u{C&Qq҈2Yi- qߥ!yot ɕ~ wi :=.^} g?}͏u 1h~{ \&Om#Qj|~~JFKV@wR6U ܵq>'1|яh_Ǖ~Q+Gpqa8w}ɍ m<-F I534C wMZ"מ`A_ĀgLSʱщ樦o5<9|n;s&'/XN2JRE0*Inx}(G VKz^Z.^0ٓV^׶6.цNlL0t)VPJ|٤T8ь!Vdυk8  {d'.d 4YBG sA&s%ÖX 7XXg#Ŗ%7d-cu:c ~]@êё-{lair RSGHBuLzk a-'ܺyUJ]U"Spf~F"ngb{zKIw,X̟/XSb$u .t-[ I\,6?RbtQ_:tWUp7.x)28CF %(@K,H TIO0ոE؁^\ <2x=u2݀RK ԁt= LܠΥb:f4 ׶w;_%P0 Vtb8P& -,l@$UX p H}RA%B\tt~d𹸫;SPI5J0/&?ͧ,vUq Dz;qBEÊ< ç[8+ 5ﳅUxf'?܎^ g̵^Yg#;wQ __2BChyK!%ƗtU~AQ0e4dG6ΕqrJV'\ꪾ2ڴQz&;x篃KUߎbnqg]Jk*&ޕRq7W^>?Lԇ?p|#0* =ۉ{2coiZ(+$M^6˕t g%$틝Z^윔w{mkbGvZw;ѻ};'/Nd״Djmcä./J *aS4H$}v[pa 1Z]pV4QJE^mG׃~:8_|5*`?B?Ъ@f!4 c2@,Zlp>+8S.].Jc+0ԁґ: ` &gvhUtgu8+y:,O.l@r,bY._ʏo(zk۟40x |//op| P5x`5fwM tEjuvz͉൛\N5}8n4-Yٕ~ɹ`1X>b Ky$"kY"Mc]2on(mm]X9 1:]Fu[cR;ĬW逰 9HysQ 8c]FnFw14I*Bn{8?lz$Ugc(>Q' \|j˹y|жew{]r="$3j`$ܫ92T.9ͱޒcQjS ތ9yi(@4 DƜ(Esfͭ$G!tLZYl 5=_18v-6EP\"#,Xp繗2 "`j(Pʍ@Qj-Ƹ#au :)$Yf68 FY+HKk;^eW׷=x|^Kv%u+*EH8 ZC1J)iŜÄ [ruj-k>CkcKL1 VaLcyhh4zBes@J(jq 2t#QHΤ;;HDk1&Z iY Ά.%7خX]af monqq:ڊT8u5Xn4y/xQgcSx7NCZ=ZhoLOQQ:d"zJYyYz}0#N<k׏fQ!Lijn^}PWNT]mRaqZt%+ծSߗ`ECWWUBKUBtKWϐ(B41tp5UBOFgHW iQ cCW .VM2Prs+B5-9yOl\Zy"{9gQmL&Fr2/(~`\M=;<8N#)g޽ߌCw{eFbg#vejLX K1UA?z~z/س~j;c0t \mc~\A&K26W{j 5{Ӌpc.`%B6EHh}=PV%z2&ѕHMy?2Jh%9uJ(u=GXJԤ7X{Uۡ'OWRa3+E0Atk7u~BKO^L( |[z>tI Jn ]\PS*%*dԆS:(|G[&GR$KWۡ=P3T[ЕjjשǔPDW x&EW!JhOʹ-]=JB +h ]%Z7f*<5Gߖ(Ռ~ G*e)tgJ(UKWϒs`7" X"(W_"~'PZ͊pîı<˾}~ :H%S>\^3lU QHI<* 81<|FI>ݤT骰bz0z| [Lyߌ0e/*]d9;?[j3\vG%9~hAO>#Lϝ'AVH]pA] -Jǖ-͈A2*4FFMp1j}2jBIQFecA |_&nu!a:SseXA/L3(/pe,:u~?%p#<_6:CR+R3>)^3+vfvty LqY̙C:<Ncʠe9g61΍4.m͹ p\1_I ^I^9;9=K]?۔|ۼٗk/gY&aVM=FiZ-:_;[W)5 JJVNz E傱xҪ3J!;Yv%Ȍ0<x(6%F6ܚ= }Vrg*YjW䪈6B$:Œ>j$ɟN2G2V/ӴG~/!k;?#4w'uFh mgP|!w {g;w._Ru޽P)d4ٸX)u2x*I+筭d *hU}(4.='mzZ%*S[ IĭBhXơoqN[+@C{/L HNbp5(0ʘF|c@glTH{>2T԰^?~hR6G "xu;ۨmGnuuqˠ#%\9:\#~G>^vS1-+HXXy-l6z0N3nD3n^/y5.Ʋ|lRY9Z^ U0ǃ3fG!! 
ޛ}m{:׳na7ZJlr4N(/Z%d ;n ,kg|bPGKQ W\zkrp;Ye jf՝*02 I!`9,ko#"\+IJ&g/Gޱb茜-\h{ ^wVo^ӇEu49)ݭbs7E)b 5lyeUCL?ak|2&g YAB`2fK}gR{#ކ-2-80QE][v0԰#/uċ/nTIr5&pF`1ZФڀ;PE&fcA؛;zDz|)}lA})f3`b&'G?N.Jˏ3ivkIN+&3Gs v?oϚpL4]?KswKld\fzcT]D%b, %&c"44MѯW /.8Yf2B @%/:VƥmF\r kDM˭yMV@8\>^KJ{SR ,ҏ }tNJ:p ;ׁU@2ml2"q=ʲ:co~ uMS?FB'֣٪ojrwz-Zymݫcm:>j&~ֽ]7[tZ9$Sۺz4뒱|Rgu\\m5alD۵5+77w)7W/"ͼ,`j-}-&D6z7LvkWōADfi~qDj?Yor-;tvNo(v ^Wqn@n;R9z}ߟԟ~RJ}bZWMeD_鈟vpړ]uU}VEI'my갽α}O9arlCG9'H{)sĐ24}rr/e;Wr>>_PL TĤ E9b(u*i,w:V?u- [݅ gTth7-\m Ξf1I`sU(fƕ *HFoxXP B~'lߒT$h])~'CZ_eW;_Ff!IQF.YF$&@G0)$ 5Aq5A86^}"}H߃-VGش9nLZh1dAG9j3{Y*`Fy%'ؼQwV:2F@N%:7 I2cAz*:kJ-K2= =*@ax/{<6j$SS5᥅u!9TNSQ `|$ VIim~Ч۳_yj5[7 c1[5 sqZB#8,cNKP&3ʽ u+HI1` {RqS@§h-:<:Iy6?ÕGkkKY Ab8Eߓ=W܍'rUir&%(8}L;*^$*.;)4.KL1'lV0X2{Q ?8g'F ",ѧٜAjqoj$s:2czrA&?ф "b'ug燉/Iy`j_ipkU%ٯVX<: E]\T*ew nӈ΃I n4~Ak)-!iz|;-ZH;7\OޮĐ3q8Σi3슣mdK=W#itHRm> fUޑbOjE3$8>-'zv3l8pMQ9Nt}1##~04{!uЩ&$+qx~J~xwۯ|oooh  =bh4rhaC7{]=b·Wt-nج6hzeH1~=ŲiyT]My*A~\~&1?kA\tq'ق*F3U% (_מ/mk:V7_yl61/*O#{("H1e9 .%Co(}nRUg}l{# DmFin3 9eMLR`)dj;lU2xbddbdOw5U} Ӌ0RK?l[r\M)-2j>=b1*+)V(YEkWqUgM.᷏nfA|΃!ILK<% k"IC!S ΡfFTCZqcHnf6:M7)k`4) Wi*&=Xݖ_mօnJů?cVX9*d{oR2IZ51brFm8LVCՌJ»^eo-9P6y (gM?5nhw̛y!2G| V7| \}Kh |.߂R sҜ ~{u3/ݬqDiXըCp(p7yѰL{;8av¬R sEѳN@礸% <0 {"IB#}Z 'ȢqR).T.H!PʛluT,pmI:~ #ӲW ~;> 1[b U1-g4{v~lؚkoBeuk{3Hn# @xZКtlbPsNӡgLok-Sq/ #<WHK,yNnER9.PY CBbSrcp,Fk2'PetVymЕ5jpB.y-=OQ[.A5vJ_[fm4flae6Ov]͢.=M/wÛ6iԶA6+Uۻ͍+K`Я]EzyYdO W4]Hny6w^w{X/UWWb)<4wyNc l!9#` F@DځdO%/$ن2J]*5D OBbT*zcF:Z ~AM izԅVɑTVqs=٫窬4SFLH \1""JL MYSTeF&rP9CYrAԓT&fEt)bLkHj3c58R %6paicь\Y|xze׳탿xqדW+gMЬb8DI%.BdN[R|2eHnfF\( HH#kI6&5D#l4l;F$&:Y0X;R'abvN vˊV^.OF8čAddH1>r5qњwWì4!8L!z[Ăޒ&1z3fY :*a585W9 1vcWfD>0;uVHLɂ=ۈ h$C+%"cAhu# -YL 8)fmD.<mȄ!%-Hg8C}ċUR|Vɱ(+xq]+H>h)Zkcf1;9YA0H,I\su;TlFUYSxqɋ]j+|(PX^0]ApCD?rգ[ј-#o},<51(0Ӏ݀۠&*gM(r@`ݾw ݝlT4g[S[c 쀕h4j@*m"HK.`+Lm(j~Ue.Udg^C)dǢ *J+ro :,-rKy9L,h/cAD2ٴya &zy3̾ ,]kag(夼6luDj-#&cBj[9L1\wlZhm[UJxdt$:DUAD[_vd,+m78UԿRХ!io$L@-Uv~%oms/䘓'˖J;\Uٚem/۠9ͻ~WOr$l㕷Moה_UVR (@۷28˓6BL$9sYF˸VZ, ƆBu ~^AUGӥӻ~[~QlY>o|ޕڄa,q?z/adz5 Rb+q_g 5k0Ј!C!bC:fq Q jZ}`LA|tWĘ{:u5; ĒTܑ;r'ΚR湣!3DѲ^h_+B*rvK`7 \R*=u+(Jŝa82~NV0/IVei5MY *$u;D`-`-wZV1іRc咗& 1c <촮j-Ze$Ґ@eHYr1,!BgbS-P&,{,|[Zw/V&Cn;,V:{hVqlVxNc\"qDž2$F$I6hKK9W&i}FAU*r=zgu"Z:ɪl3- QO.W ǥJ658v]s}ئX{nW qea70#Т;PG9:¬f E]Y͙}+B˕:]Rdžtj8i-qz`^գ"3/CWC)t>@WN=ȵ]` `-"UAQ*( JnOڕ0)*p ]Zd骠0:F1'"\@o*hu骠48Sϣ˥ܴw6NBSEf˓N0u%m`觿}t2_<i4irnfF/_r~AV H+v8n;>qg߁^(~ Z cvG&h2N[ӎ+n<ͯ~mɵҠ Rf>ΏK ~k|-řKƩ?'%#`az揣ٯW77ej-ϰJ޴~zc#Qj"$5 (JVS5Ԩ{c`7:F J:FA{[NO1 J;c($*mL"Uj|u*(5 JOtEq~E:]J2:C*&]ГU *hM d(\ao5>"Bv>XTZO'Z֛裂Vt^*(jJæ^ fxVcAz$`tHe2=VldفzL#"7tU>kއ'ѕ`qu*(A tut%Kq Pӿɇqb,-EBo)|O?HѦWZi"Rs)4Zf!F̛R5N@bp,!}_i]nC3OWӒd'qgf*M}Yk.5 OOX&#MNڟ87{왂EN~ki5)iϕYX2 57wi>Gڟs9ӫOŔt}n76el>dZo+/=Eݓ%[|W*ie/u9_RǛ{RX M4'zfCڪ s9< 0Ɩycty]QoW|:K8-k䒿I#].ф]Pr>ghFK!Pn8z\2OͬR&m4F0k]Af;j˻bThE0yNl7y,YNe]Vhws^+6+T$Q3?_.Q\larrcyu؍G+RHS^ﭥf=Km_1j&Z ZiW.oӢd2oҍQjGzGo?]&tPB;-i+7tE,WAjX_̯ PX}~~8SݐJcdw{! 
_Ws{{+zuޣQws9__@k^kY=J*JD j-5FK$݃}[fnҳRS8J}t+P5%D%J֖NSqrC9!}VD|K^/0`7Lf}9`"9:zTPagyI-UEԈK.# ߖ19{uΟJg6tk5E= Ul}fʻ!IrqUV 8מ60$h FdXpz(C)5ZldcRƈ(fON\ tg l8;GYsE.n 3 {re \}VۮlV>GzDWvU ]NW%ꁮΐPH}+ukUzR=uUAv3++ >]]%G \ʖ¹ZP J3c*ƣ*4?~LnFk}ٻ6$wnkql]74#))UdRE;5fL_TUR7++_W#^n}:S ?>KC[AV | sryA5ayun<%޷YZb}tI|o͛7 r9#YH1h\c D-#HJ_ןˑoGXrj7c&n\_w5 {~{gyZf)f􇪞߬-n ?3u-*wD:Yn׳HeөVV|h|?m[KLi L-wK fBzX|T(KmL!Gh6%Ԥ[1<7Φ颹2DFQ7uCb^bR)AxqFG6BmlRH>BA+վ8U)sYޟ?UJmgk.X!VK,I#S翻o=r2aNuauҿ/˜6{[~Zqq$^%'˜*܉F-{WԤ=]Y D)'@}AAcIKGQh-E@BUɡrYZ ݩ SEPAE(RmMbױt&S }MMk唌%xR XNDJ.%P'B) Ƚ<Ԗ!_Ԝl:>F~FgYw7Ƞ^I/S]H1-xBw+p{Ӏ㲈O+[e3 #J'A7`,(C%`>&utj7N/q~Ǐo3ʳ'Gz0#1# U7 G夝qV0W~!;%=ҩѻ.fg $s i2:2٦KM`B{>jaR]<`K\<[)iQ>xZg\e:Y^@gMnϽ?{gzsT x{5nϻn=xW)Wle};G> BFEWw͝sm2=]wߎw}Z| -m_NDZ;}q[\q*{ݽٽ%wOiuS|$Փv;#GY@jb\_?By>VB|x6|8vݡ;KrEoVsާfvabWpƖ[J٘ @HUQ [mNodtv:tj)H["D .3Pb!0e,T{79lT =رw&a ɳ;=˾<;־zqOzgB}ޅ}`V>Iڠ^ZiRnlD|Bu~F[ ei8nVu*Eti 6$@" 3@.ǿ/; z=ޣd>e$xH9r)Z+C*l^VH'SJt={- D "ԑ1Y"/RXصltc3qzOƕިQvݺB|\Ꮹs$6ˢNXuRIVEI`2G(1-Fmab6ш.JPBʶVOQ% HV颥k~džڙ8 5nT3c/:Ϛ8`[xqs*NɑNrb:%Z=Aj9[9\;0R8jl2>qT>CI{#sP$LŀdE'-Bi0"("_JV9''H`=a&(Šȼ`a;02vU:4(,d$z,p#QeegEZl7zɻ/hp<^Gإ/&QP Pɐ@(1vhD-cSfAR{BǨi3U _ط| WR."o 9]stB[.橠vgqGxl+%W)-VKE{~T<)XVJߖt5Er*+H#co`i%ةNaLx:lD bgq,"ʎQ#CPrF;(S =8̀32$䉄HA鶫S`Q8o7hR7T첓^gH=ieF/MD:FyDW̏:2.ΛqR3-9uǸz\q!FP + Z'SLJ, IDB}F1Sqqiǩ>6Yaǚ];G~?6b}8 O~dm$ϳmf5ٞb扤1ADLɐ2Ъd5\9$Aӯ'obdΠi\phQde`eE3vK,WIxUt) 9AYDR@ʷ3B ,fR몿yK&B]LǣtkA=`w'DY(BmREdN x3ѱפo 9RSB&3E6YQH[T!)(J:`)L6˷474kPUk+\`"((ILXʄŒ=h@%vկsQR ky0kMڻ \M5۟.j"ml<_jG-mհ>B@2϶Ȧ hRmk.J@fcJ1F&f&:c T}QǬ-UctT7+{(-B0TA58 D}1{S!@Ig/ #Į 0H2 O/7|`;O⬏k-ػl;u.wF{{py;=N=;zv$fsv44[Njc"]~Bk† i\A IR[Z |Г?apr?fQzϻsufAX$R%Vf׾:Ds/$mS `̜ uzw^;n'/>ƣ'zub>@o'uo~.I 7tIjS|I>RnbubM @,9:Fx&* {AԥGgA+5>X? 0HhPL%.CfΠrqzT8EI.Q. IJԗ5ZG!j,*(}dJZ4Sku&#Bh =F>gO|jL>6mn ;־_"۬\U.ޞ#4:kkw)`fqAeHHO%@$,Q9R+z9x9гzzҐJl(%Df9胒9bE0+bbvl 1 Ti\vJNg/$eZS⍏Z;ϱ^j+#soR%A$/TBoYI&ZѷzZ.C%Cz)ڳ?AȎR ^ xuЇCf>M*RfOC`1DKKtCdkTKQ&'Cm<ƏsC> kvpm;i>`~Erpkno<`:&lA2S:B0uO,,Q7&ZMN ~\&UY\ d4^΃[?[gkmH2r8F?-)!)+b>!)r( 1`4{*{*W^ p%9 $ KQ׿̾WқuZy=GE?w:?uDpi(ʆ>JߥX 1P&h E'ŽE```Դ"2 g/jiWfdFWo qqVUSbZE}އAۭFPRGU/=h)XSLb0 ~:"z\WVV3r)8ܦhպuMDy \VOSO˜-T2X^ROjSay] l1crҥWT=43ާq~-XDCfѿk45A )!Mdn#j,Ȱ0d[ӛ11>3&YRYlw~u_"~Ǻ?(\}?OI\ X UV-1%Ns Wn͠7'bLs1sj-8sD2F[KU0)V΀lfhXFY; ZDl  gq_H ÌМ34%>!^ni`hՖX~ݦ)> DNr2c. 
Z1^}"JP򱸁c|7Ko O0N!p>υk = -u@vYBG s sX`XKa x::)4.Y[FŖ"@·ZnVgle:?)e)8KebJ Ou# 1 ° X[NX/؅uUږ2D]DsvKg>.n,ZX2ɀSpq_WDH_xQ|虊z$37#\w a)fG8i>3㤤1)B2(HV܃Pb^gn¿Wa,MTv|PP # m0vgVhUtV/[*.y>,nJ9@`C,tC~(?*/'?ٖg ^iV&cLoƃI|/e3.nf^eד#_^d0x.*X?_@/ 6j+TSo@Y (Z} [k0sBJDcD5p?gγF`޼5srDZFr[.@#.@ֽE]r&;h,GfnMc e & ¼&B"p TQ͍QmQ'(<ެI|:|l`v%&@y\fG}Rr׼45`)S \b$q9\xؤiހ9eڝgܜ$K;9Uct)28CF %`28x`qprE؄ A2 ǨU1kQR%loP=^uOgOaMڏ_{ҚײRMg)ewe&R v;R (-91D=Hrb`X4Xs/|:RaHitL*؊SHaQiƝV28X V뱘P).6 Bn'G+OXj^E1NzJ@ D2$*inອ鱶n>` x FL ' QFba.JP]H\mW#[ex8&I5I 8@4SW _nߒ 89uϹ!}gy$Arn%_0͆Asa@d'1Bi9LFƅ\&ÂaDX~q&(]!Q>d`GyѰj#y~&]`@r SD.Fb.dmb25x}:U`nE- kDRb\\sd}eL K,>GI|&GsFP%(Ⱦrq(kxH8 O!hprc +a~ 0JI*œ{ ~-T?O^L,€\k{oOWR% LAIt< CW - DmKon麮R,_ofye`X(7ɢuos%#V ꬓu!Nc=VHA ñKUL@{,y_ K=- $T*`NRV8FW0?}>~#&㛿ku#0 6:0>Z!?дjiho4Ul jՀ.m5_>R/Ya|g-moRL݌8({fy[4c3}Će~]u&X,,T Q  QpT¬bII0CHEI0e4Iq-`Y:H "GV[0v8T^^?bk9i{KDY!mb:x1X L8+&F(8u9i^cs@=0=FӝtkNA<`U4;B^O[\E(qk8L0˧Q?6ңYGH(򹖜N)bxЄIo Wx5 @ StP0vY4a#A2sVk/;^s%"c}pqjYMbG&H1Z?_E2M ݆?L gWaM\ oAP,[6 rzݫ//σ$7v; Vӯ?zBmaN98O%(Va] ޺v "B'T.?iפhj:Rjsfȵ1O$&zG:pƉ4ªաeDYۧ]@Zl;DHK+VƂi?Ϳ6A9sY04\`Dd:1`k(+s4t-{< +&N ac'RH(<`I/9>0SȰ t#]ى ⌖o׏f|WCqasN=Fj"Q [*FєS.]CBo_ڝUo"G ]1o6ξy`7軗4͞In0.a7@as͙A6ƒEiCΪ9تh>U#X5[zּUc1p,(5A3 ,Zl<`DfGX(s/Łae"`FsL(QmHQj-Ƹ#au)$00jLh] }jU^dmsƴ|}uvxcGf`6oy"O~*V\+ 2+*E ƈ -tcJ:sܭHUhtP?,kcKL1 VaLcyhh4zBes%Z8,' !x$ )Dbj3g@ZF BahnD4g;YgcPt|WY*rqMvý6Z+kԤLuy3!jQu _p4-QBnmqI@h{թjrb_ƫP:ES,zdG2;zvhBJlݿm{EݕW)\y07{V wYyꎊ+:5gi쿵^o@=ϊ{QX`2i2Gfe ˖[g?mNѳI=uʲ%in{$%]g`RM8#urĕ\HPUN]=Cuň$| t .0T+N,|a`%엟|Ӳh~|wxqQesɈrۇQiX(v6jĕNrv5SQM3&H%sڳx;}pЌ %a T (H z{_,15WMkLYO|M>F#:N y嶨&W:d?~W{8G 86ƃU{-H)g (g|<8 }"ͩ$-iiSWP]IN<'3):uegH vuTSWP]).:'+ lK1:u*Iҩ磮4ׄ3RWI`F]%q]]uG_"{zr*乨+"¸*II:uDqyF J|.*IZ\t+FV[vX/$p5H^Ey@Kj䴱'~p>˹+nqŧt,Ec a:93Evpd䓇ݯZۿ3ouR7k23tny$XLjA9[>-S3x!ќarmeBJ)Y:O|Af5E YgƑ$׿B,|J_ >x/(whRwF%9k:!@~EVJf3V3uU(mVg.nCt" ,&)\EJQtut#[+ ѕ¥,PrJQ}tb~K ])\1[+E͡ӕ sg1UL&6BDlgߦ¥,P|t(ͣ6B)ఝbPF ]pt(\?qI~q?B|? 6]텒yy]Ee† n.oK僽, '6CW Wh+thӕ aҕc]p2ע+V h#C+EɓlDkc:1FO5^IЂq<=-x'vէ?-O^g[nFc(\ [FYn/UY[L(UIg. A䋋˚or_Q?|Q4~x]+s^_n״K؊Ϳ}VͧOO6!U۷GՔM~&wQ{3On[sD(~og+YUQGy"ݲ03p@ϋ}uy}fG3g /3;4!GzYKlҾ|O[ep2:VRV5jwqt[vl!_x6SaOջc&_)O1[|$ #>]Uk$WW7 /h}?}+ۑhQpK0%eǶz(B.Y#\ϏdK.BiBR&7fu!lmlF2i>~ƎRgF:2_kk0 a&J-KPGwa BNzhst]D+).k bzo%.b0thFKaFy8'W|(/EFNe@}|IKc8W[piv@I$N1#%I>'Xz B8hR@ IjzL-xoHDY{KB4gFm 7=7ʰXF4ن06[*e`03)M~B11ИUs#QYGoMC#FC)$(^7Lg|~"MӉr%[z T,Jv>"P%}z:o.B Ue-Gps#L mh dk~zsYJqM>@{"Bdʝ%Ǟ\1c##[$33dT&Άk|`5 jkKЕDDHR8f}VZ_Zh4YDxռXVbZȑ*Pwr=<8T``J"s[k ],)";D{|֑]Z0]Ț/ E*U%щO\b # h**u(:<~5,dF:±e60|-K`1wuU6PŅ̆) nAhJ}/sQF@HU3"X=.iݘȆRC1Tkt d{zC8mK` Ú` WH%cHT " +ʄ+Қ`AV 껡L w@ez7HQc9mF2YK! #蹧%ٕ/R uW4' e2a[̷.QҚ`BB @ J! 
/5 |u؂@x`1g#~ n11uKH `t &MpPgN42D <l 3q_ b7[Z(SѝE4GRF[d՞@ Q 2 }#uT `+_C0q3^UuAT"Fzm>y 1f^$lHH>0!5P]`3d, cD`m+V&{M^23.=J/wbA\f IHN ͧcEUI=Ӊ8!a%G< }gM:e\4?ׂ`o3?8uWw Fm3bka&a=Cw /- TtU2GNR$];VV 2j i(yG a (dU( {Ay[ɐ$RQd"5A C`JtiѺ<=K`r@!Y#euU;Bp XTuaU% 9աk=!V1bxaQMZuY 1H_kcwռȓB,U'XOXk3tdDHcc.)rŪuԆHT&Qw}THGQxAPA~,ZZ5Kq[S@hՎ a<Qy@'^oBZ]+*brjFF^<-Qġ/pFJ"(يG]c "tV% L5PZ|MC@;H 5\oʰ ǂpN"fcrRZ(U4ݿtt՞Ewv4T–@ jVMáJk2魷f:x$T"XH$N h4L%C4˾, F}+%@X}8ݦ>|9ke>Z1 BttsYΞf מ5`[˿{B{gSCŨծ[sz5DvԌ<R 2j31r+@9p Mʌ4MväDy ؓZN!F +NJdtLv R K;P<<XoݬŰS6Ձ'ۅ+IēN96HP'w b!'!寺CE7 F-%CaE5FHšb$Ucb%V][s [` j hN[hcb=vg,kIVzPi@H=(L}+Z='5hg=kPqac|ƛk΁ >wa*C*mP zxi ֚6- 7.4(}!Z)AH@5WfӢ׸B%aPJ↑ ψ4Jäe1[.Uen4& XTdbh&")ԅN V3f !KkBE9VE0"坴$1 cV/^ryysvoCN!tөXO~J5g_}Bq~z]A()y>vIǧ 1[~7 ѽx?N.kx rvrI ^^^I>?5T8=oͧWc7[{ ͇ׯɆW,A4'Vx\;\\?|𠆳Igۺ;/Bߜ/][ߜ0~m &Q &Z g΄7*ߒnCEK'=8x㢢|pXtXNNNNNNNNNNNNNNNNNNNNNNN:^'@]oL ı'jz^os?;@ @ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ tN 0@v@f@b:x'Pr#XT`Y`]$$_f8H IIv{oղD(۬evTWթsoש.Nsti%)N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N*N:_'&]r#;N +Eg@X@ dp~+Nsta'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'Pq'PqzFjk/#\jקł!8hv P!3%+mWKVFb\:WM/ JCt Bs9'Tm v.կ5< y]gg7#/y޿߿MƳXFiZQ٣m 87jz({oFя?cfv/#4}o< ~n .8pGv rYJS5v dψj6Žu{rLWQ@s77𯅨_vEز9^aI F-i, Y 0-:kKӺ;2h_7\c J%8C(K/+DT QV JSi]`Ig v'hl;]!J[9ҕ.+B3m4Αf,*ם+HW њeQW]^]l81]ԹЪԦ]t+UЦ74xtwy&]+@j;]!J. ]!]1.H ^J`*CWWuF]Zl Q6:P;DW՗nsr 2vB:C]]Mh>'}/Mz=-(]iCvm҅ψ :V!t:&0UYK:TTf(#ZeE Vt]!Z-NWRHΥ]`q-+;jvB,3?GRYb:DWXw'wp ]YeYtwlz}AD,N~p%=-]V(վʶm@+]Ц _!BVu(tm+@)))tutŔv3tpeg vBWgHW\Yct [e;CWרׄBWgHWx&(Oȅ6S5hۣBF[^UYݝy5Ch +ig4 JE8C!RuVy>pOm !t(yy>vtS.+,Ew}m^DW g@Hy>vt`-Ǣ+k:CWV?()tuteK1wnw#ZcNW6U|Zx 듿n?3th k;]JSK~CtevlzsA >u}?^}ZF(= ]퇒L]=:v+kLWa۶%cΐ8oK :CWW=˅Bi -tut%(Wva+ ]!ZzB)LU؊{s2⎚({[X7S1Ii{[ZJFh 2sk/֠.cV6J+jK˶KL(iuCR]+]~* ȘIRLI;GZCU֩4uJ;ke!f2QӠRު(^ Q/})ûV5C?ߍl2CU^x|.|!>j3)tZݽ[HT>mk`, Tv<+'Ԑ(u\>dc/za5EWt{QKO\jYW!哟Vwu7 W|x>@4spuOnHYݡ2DJ$+ϙ%Lo {9/y]?A|aP?L8.WgaXv:/y7R [O\?oZ =eW|ъt#Pϓ4oQj_jyS*)LոyIY@9i\Pъ"9&%>pɻ(19!* !8@MS8TwT:OO\&`"#F'0Km RCteTg {QvߟG5p"(H_,۷?.h: #d#QQ<S%CQNƁDUq))\K@J6 2[BHZBfkôڑܜb| NpT[pbٜ >RqsXڐ|>('8 Q+ $8t(˘Rژ(&A~Ԝ Q>piQHBÍw!pTLLZ :GmiDq\i҂ ^9h\cD$Pw~> ny/%FCwYOE)ǹΎD)S=O^JcDEtX *q$=v$!HPGM1Gm7̥31hLAGͱG[BR,m,VKM!d" LHFQnU@@Q)tt6 ,GRfekѠyTʿh#s*#R2NBॼ Qer,ԗ], Z,%^""`VA^ՏnƓ hxjz0rޮKcjsWف7Ns7f5y#c?/FA%*h`1'G(^`C OܲyZᲧ^1OeEHn=WG J,څ+b)_}]?c#'[9kD%\)G8_!:,ϖOfFQHAbŬV Dmc4hv)uTM&"eNef+ST1'Xr'j"HebHf{dy48;xoD|}y3-@1 gHÝH𤸍4Ym#8X*7YV⦺$+3f&0ѫր7(`P#y~?qiXgWY)qE,OYfC~W(3~WYW֊]%m>p+~׳*tYu-3ߌG̅E@n넑pkѤ,K5zch!dۍ ̶lPB&S ԟ :8^M;j3sv;*Z̢iq]r ۝=;۝ƣNA:|ST޽<*Sr*TNhQTY$䫢kֻUXEYVQ z(HKZ*r\cajٍbfXجg셅( ^[^xXay{__ţۣGR\]EGe#j9!J"o/*F{pcҩbC2UlW6e3u s;I@:ؒ|]@9ٍi~(XPYj7xLvͥr2&$m ^ bTl\*y%^ RS<,Ʋ%̐92wˊc VrbPd&bfÞo˗V` "6""4FDqB[cT(he'Ux7L)v!{_bBdi}i! 
4sc2ZA(ցBe80eɒLZjHcDlfT^ĸ\g^/.Ƹ('\p/G;L,X`$H9R#`L4'\XY  a' IU!- nJHZ:DžJpi㍪~@Ŕb s>R&YG$G$y&&u:*umFkF"Et`6_Q͚ibI?QF " 4lmtAغ y/Lk˖z %ʜ;,.k"0Usij& fNH2uD TxU؃윐0,&'vrAF)*|C.(ڜP6IMkopAB,EQO`o~n Cvz#l&}QK!ޕKS0;h\VWSFZJu2^a2)IsRumI&Fc#/#E.1k@xyotnDRHF kkj%_) XVxon@D þDNK) [ &cO72gwdu$[O_؏ӏ/e?y[Z߾i<M}(츻DB׾7((&LΖh:}K S0*$)$ߧ(.Vc{(e x%2{e ꦪQN `9{Ģt2flz>)BoAaȅ\x^wD/6ll7^T_񒮳Q]Х&S}%'kNpHd 9P^xPeBAڗ7xLXb hޗ9;7yIłqNjVD@mGfDjȺCTU7JZ%]h¬dbn]9wBc0f (ɷJϖ׻]h67Pb_q{3ۇ-|㳁s|m7o/?q*ZcnR!eA)8 7ȐKW <5N3=|jcXJ #y"\@ :(FqxQ{s#:(HvM0&i<5\WSr㋏O${Wi21pBTm.Y'KJ` 7Q&Z{gZ ]C vAAQkfwvF6ZxuE7'>³OO?f9Ԥ7=fk)9z}:PC뒊64{lmm_ezgfT5/|/&L/YC-NYv[6'f5`js"+j%n2{(I5C4 [)̅{F]9m<\]!1 c1KImT6u;:x>\ɟ*sfI-7 MU[r_0Kɕ5pm:]؝PL@!|ohCfΗ/wyp8<۠͌}sQnx@vw t5x7ݴXn2wh;a5 -F}~6On;rá:*W/v (Ŕ]Ǻ/( TRblJ,JpE74_,H~z9G$|3 ]LFa48*e wػSIK| a 2Tk&W{`.%NO"zݬoGfm+(5O@L<[ IeC,QQGWvie]R{\߳X6T}Nb\>};ǚB'm  O6hwd h.1SUi(Yo]4dz:9h|'@S6xٺ*Jų8C7Ż{$<ٻJ.~*<[Y)/q,Z_Yز>(ľTsϿ̞gq|u31&=UbIVh._*З$ȘBhZ]r13[7 /;108OCc MwGL4!AxLu ()9 Ws!5!,#u}{Ud @r%$#jmd5y)AiCNo.[jtt0#܈G:rvz\'\+Zo$i* ƒof*Ջ;,Obه 'gYY-o2O r<*16ϲfPiMxY@q7;6,@&[Ulx7qaHEbk&EQh)ZЋM[UW%itf~~3"V]J&BMA PXn[]QL˷̛+|,t Muzy61翅,oT%o}2yljTĶqAt[RIpSKV%$VGgyx~m~^cw˓^lgPO as*g٭hBw/N&kga3!3iLN#wf=vUyEͰ E(պyϮ3S[{YY묻Ҫ]6gpBU䏾 {?1dG5~C Չ׫5嬜Puꗿz?䯯{??^z ҽ}~:_Q,CI.G#8|GkqjwSOR3 Iqޏ7լN͕i@JW?L/JxuMsףYys\=-6H>-MtH"+qg5yj(8, 8R{0cV$y'ځj9u\m%(l׃2@dK a|"ϕsEA@*Ւ~* +#\R3,EEyֳwE-7Sb3nirn6.gU#"d ޟyj_&gwS&7޽*Uټ6__1 >1Z}s/d~v57٦Bmb:s5 RuR9廰}?U))2{+]UOWR`j;AWlvYе}MVGBV%`**LkuW%J)1R;Uj=u EبhtŸĢ+"JUgbו߳X(^s]6Vu =A0I]]g-t哮-z HW OKW+AŢ+Uw]1%Jj4St%'"\-d,"Z4"J|uP=ZqT])p)~EWDF]WD"֤J+Z݄[3'b)4fo_h`${u' ytU`r$u&pj"TgAeyMvX(7gT87 50b,RZY7><U.j퉦g*6K:MVn 7^ a1Gܿ^M[!|us%O?|ףꮅb 6M]kG{%"JbtuVD+OЇ;!Gтv}SZL1Bt.].3̴R뤫: {ģ+w]1JCԕR: |4b\ӚwҥGCԕDLkm,bڮkGL՗+-+z-zk]vOlU;Z>jI\tZtuHуBŤ+Ƣ+Ubʞ=t4^AT"`tŸ2]1}tŔ+ ~Db`qEW[Rau5ڻGQ@eM;N7zM5z?rkMG$ƀ913jӰ<􌪜.?Qmy(r+O|hn}> i?nnކF|d7{4Zau]R~3UeSPeqʦJUHF GT#jc7Qũ!(:nJ0 .XQ-Q 6˘ƒbZ"e c1}1%;uC!4D}"uŸŢ+5w]1M@mDb`oᢈN)n$] GWN8,ϝ:5DWL\u"jNŤ+>]1FWD]WL ib+سa,IޯvipmCheWv_{ -z8.]1.XtŴN]WM|+)% vq;ϾZuŔR$] PW4IW#h+hvسYFWZ:}D"`]1FWL 2!JiGP! wcN gZ$[ڪMSzQP'"MEvZGb0hB :h=~)L!C xtG+µ]/ԎVcuŔQWV+v 0fh=^WL)!i"Kih+wҤ!tEJADLbt%,z9=t?zwnvjEi虮d ]ɤC";כXtڋ>^WL)mu%P銁ޫ%ӱhUWOt5@])}Db`qEWLբO(}uRujYNXZvHFп-Mڥ`ko|*azGǭssWiۇ~Ѫ8֍T=GU:aRƸ?Kk uuzN@+`t43.SnL=bL3 @`ʾb'1y%bpf`siߎҥ!.tE>~]b+4>jr`IW QtŸDWL)$] PW61g`OtE(t,bgJƮ]=^ݏ]6vu VVQBW:ТtE|4b\i]WD)%&] PWRZMDb`ݭhUQmէ'ѕ^i8"]sox]1Ju'ΩK:[.NOsU}HC(/]R6QOo]o3/2գ |{E()Ǟ̿~S[84>>=Ɔ+zYMI8=(_,.ge.,ׯeZ6r}gϟgnzA#bk~*`mS)SSsUr|>omgYSi n@OKYv& w{8Bv퟽; ZbOJw>*Sqr6~8ۭvc֎-٧JYJ:{yW~ )Vc`j=$)})sxM!UDʱzv="ZȨ;J5;z&zapUkdz.2Zi(w_kS[Rf_PPݺwD;c\c1-Θ;`(+F+~U+UV۾)M0D]Yj!PD+Ƶ%Zw]1rIWԕ(\L@yQGϴ*iMf]Ot5@]ytZ>"J-|+ܳq,Uq+*nhz;ʾN-tIW=~ڭt+G+@th}Q&] SWT2"]1UqEWDD+H$` 2nw[:!+] QWZ5/O`n`X|wnrkN{nҥrT-*1c<1KN=`J/R1u:v&]1fh=ؾ)KШM$}yV;R>&[M~:5\3q8˗ͳyf])IQ{9 T~NI{P9kV+YyZ(}Qu i yo[>-pr')N(qlt\SehPG&iCŒFekΕtyr3f9?V{O|_|Ԥ7q_cݜ nsf'R.6V?*^TUk޼}-{wlWҨl~ղGyD*]Hn(?1dˎvϊCB8țq|W B{RB9i;2śs"/¨ <_g?X'D3)|ȥv+s$:q*Y)6J e4X)!W^rW|ﯖk$IiE׵jyN%~Z[*|չujSy\]XV>G%+х)rJz)4TU\UEE^hpA( !/y%&Q͋ed tmEeOl@(>jյ2eUijQQ\zTY6˼P42DށNcJ'\PQ)*0xKQy[+"*X[H)n[y9TR]#U!5 au%k 0X1΃&j)yK!T1S1RL1f7(e]WEttK"υ5G UnYy(hY0Lb0S|C04Kࠤ`gSOP by _V 5ȤuՅvtn^,TɂN5~t}%y E*Ίf md-@b$S҃Crf 'u'K =*2XkxhDPhc ٣.O9sڃ^Ls  i/AwAK|̡P=n;t ((L(,C)9iDκ$!;V@r]Zx _3XH[vRh˽e;Vi!G8 @ B25;9vǃEQHIAd&#E4Q]  VAgUkU=By0) !.|H$,ZJNX)l^ /` V #liPF+ h2+Pjj5]ZVEu 2In%BGi/^k:hI]T&ƐZjm&ǃڹGT2'I¨ 0 JĒ03 d~=Bi8!X7iΦrj4D \%t,<`VMғZ >E!:C$)`jI. 
r6R ];S@]pB=f-~{۵y>_Ѻ lm5gAi풳n4T~y9 ?7\YsӚ_h}s8y}՗Ǐ x{T2kwV1)U0&WzݫR53FLh-/T_YC?Jk,7.˻AX N}u_|YOv/-1@q#k}5hqUZ&{N1H-pQp4^R>ߴOew^O%y~r=L]5~'ckm l޵e=Yah{rH9 |Udenf8[|}5nK߹僡;c|{u=Ażzķogvħ{{#?Z:"{*"c^?$'^37 |s~7ێ|:ovA6%Wa~zrbmʴEҢ(.Y`w eId+6fJ2ybj=վw|/gfϫ5zѭ,ʇxeKV@fWT"^.?H͗}ϯߵ͵w wR$3 0۲NOήvw9JޭD~OxYzoJN>x{QnQ`>l!cr iȴ3m5?3_ 3Zbz󝏝M\>凞53oğ7V|dQlm ɜMmV4hhm!}F o.IghYyY} H/>0}U9'ݴ}{8(~E7|q'Vd]`,VLWdߡMIR)WlYـOb (=$=}&vhb!*W(_L@1 gדnuE!! &C}}'}R|߻ _f˳UTVY8N;buB|@=La2DmK00Pk7VyK-}-'S'œǫ ê)$n[eq>,H䖶[~G9l7SqXMhz=l(MԐXrv.(MV)*i{ ugDCry/ d*hXBWb6ނjIis'ժ'BaM S<hB`,GLG|]қ|9{T3vC>6?.nFf{Wr:9zX+4 ET!SMtGD)!RUU 43ZUQa!{`)zޢluM[2 \tI4Qay]O%9<1rp|f8taUʡ{;륶2{+lKhҧkmH~a~?w^I#x V>!)j(Q%s=_UW%[ s h#i枻-",v Pwfj< ^afӌJd[~4#fF| uϙiuapwqk.o 6E?4GE~5JW^+hOއ |AbFP1DX7feΐrdHQ?E;]f?V@I_QWCMwGJq糯 ε8g]r#Z1TLz(-nSJA4<*[OP)=c*.(JRacxL&) 8pXJRv4)CIYM[ڿx|O"1tɦ^5+ժjUlQk]U}(JAI5J<TćV/^`MLb0 ~22~TQEul Sa,F.E֭&uY+WՓœ,ĊjVԿOzSayS l1cjҥWT?Ȫd4./՟[DCfU&,,?.A b7KAq9\,VdlÐ6̢sgz#6f|cFPS/+M= b_=|+yW}B %52w7*qP3m^/qdU|sHu|Z&#˔e`o,Bm.n6fCb2l GO6l,م͆KsrAkLqjqYE2~KC3g@`YţN%WA H9Sl猽OU;D;N<:'uʌitЂP\ 2&=cҔT:Γ]9"_QKo-|\L;~.!Qt¨$ KAGkYGMl`juJf8L'jFƊ,_lw~2./ab'7-\6`m.C9&Jb;xT5c\}S'ı[~2O 01jcF1ZpdL1`S)U!cFY; ZDl  gq_H ÌМ'C'˧dv8jKYub,?nB"'~n`9}c,h];ZWL b:BxK@1x>"i/=*+i/ӹF,s!6CGmddY|r@ŧd ae. ϽϹcaIk,I,KldJvl5[ Zwp>buz&2[¼T& Q=0Q!PoMX! 띀k]PWRVmbtv]ٮx+`^<ӎ?N\_6LIiH;.ㇿz*u:El+)ހ*(]тc->\Fs59X!%`tαv$Y#'Y#O`޼5+d 0I;[umpU.[ݘHe5kq Gr8np!;Ǻ?/\5=ִ> 7l!3ӈ)!\Q"4\ 0̥Z:B R 鐫 з4pDXc+'ڤ3&8iUrafr*j;|[rDZ;??} g`-pT5y$ٴHέ lpW4FtIX{S_*xh\E>,:fOJKg F6GrPj#y~a@r SDΆZ J1ugf<6= }f?@0[ia'CTaX`5ÀU̼ Ť$r>:L4YJp MJQj[dG9g\[E"ឹ} A+kcXqxBj“0y.Uœ{ [~|W>x3]gK0^jd^쒃&ZQooL1~(bmnI $[b|sKM͐ffz3S=,CA`+ZE~6Ed[*FލY'Zm+C['zZ7KUOǠb< g@A- ?U%8s L?wo]wWo޽D]߽۫]O0`c8k'OA߾#Vm5 ͚4Am}ڥmv݇G4+ ڞ4Q_.TXnSygVEC:1+ 6,*7\]gq-dJܭ/Uq hk:^fK$7v; V?zL]eN98O%(Va] ޺v _R7I &uD9hkcIL0&;y਍i W:emvZ]~@f!BZ1JV^OOmiD(\!r -%&h;tpЕCz-'}VL 1N(fQxb#1,`^r1|`5a(QGЉiښg'zlG3Zr^;6S@. >{g_9=#Xj9 &RtF-lHESO|w--}kwޔS dEjV;bl}2 FRjt4f}wNF9r Șh 5(¿V8dyc[5 =Ug[51ǂR4ƣKd{ /}Vu,VG.ٍla"c„uV:F1%9MAgIhz*_GǖRkb˜(,Z/ix(jq ]5B0HR ZzgW$"﵌&Z ih&Ξj6c>NkSc5cK:ۗ䆄}pE 7_f%~ =ȸ0#7SfMsyK0 fRf}HN#%\ mQ93(GOUJuyAoV|$ۿ`.zXjŤ`Nǀ}h!kH9k?8秐jT1*црXq?@^U˜M7?HJ 9=0ZtgK`/dI`TC&2saVQw0i=*IMtu7?{V~[E|h鵽TG_U9.JUp YV%s(׳uj wW^'ڣ楒h0RvEGXh;* ׎׎qZʆԜuVt_jȉW<{y>'E&}\:ޝp`Wy_H+J?GʶR:&q{-)BzJ$R.㗓3ĩq_7=h)3Lrd 9LR3D ޸hyudF!t2$PaH"JPB&(2RHK1^K^gcu'K;2̵iG?:2hL3)`97ycJ*/e9n9-Q1䄥 ^*m4U aU8Rn+帓F!Vk/hN@kOwm;RWw@>:MX,pB?%)R!)z%M$-pzS*[)wAD')R^ 9Hh.XFF0^W67+h/jdTӛ7xhy{B$1dh4h+L?j˗19iu:Il㥷Mt_L}:x:]:q.jiM"BhJBFt 1h2PRXkJf'UG}\=_w.og@m}l-טY7-o` Q=q@E/ <v SXc {;m"ۇ7U^ʖ[B*fvB\f*(X1cDA,/ P\cN:*2-'%zY| yܔH>} Sy|t(Jpጦَ;]Okqiz*S(#s4|ώ~+V e6Y#JbŬV!8cL[8D9dRU'GN1R(L@QfLY1L,U&4U!OΈ&Ufeg5s6y㧯|}|9:03~6tfYͅ!6jr6=giws[9 Y(s7.ڠ8s&XrE\kPb#I !Ymd;Av&Yq6>M^&{A<^ɳq\6uyXc;cqEYLdə\4 "L_cb2g: #s b K)O (ŀRfo;ܟr[R?XB\Ef*x)A+Ǵe  @2POue;ɸOB]-|8r.0 2uLr''98/9>l19FE41huw$drfE# ېd6-+&z$ )A5z]6Vy|QaM4j~YyՓrxʏid!ec ^(FfJ#H͐ٔ9tUdTFXTYIWee'(ދzNw49)*PB j#c5sGz\V󌭰URz05Φ7\[7zBu|<;gC#(PE)K$'X m0Y8hW:dHH%أh 6"3+ۨ3vGЪRfݓj8b .ռc/P{q;㕕A91J1,R .i(ZpK*fo$ E  &7Aq$,# T'jfx Ƕ:DA¤dQh<"g4K 4 5!C0& 22L)'lVDs}0TRE2!P$Q Cٝ_LYG s}Z%⢨xDyps-?"k%}b%[+"/Q &Bw̕6 Yb-Xq=&_KAZۛFR`s}G6 w5UGnr7_޾ۈZBi2ذAb. 
Jkw.V*ӡ(LW~ ˬ}콱D?XԌ/.y5Gt =G^n?|"MFaQ7k~=y%;ugX"xj/6˲aґ(c`Zh$wqL{#n7>)WRJ `8 wS6<䠤_)*\ʴAe4 h 2c\/\NV=`#빕-5f+=3a5) u 8E")@ `4J3>#[zA5sjq(3x!dRjGm!GbkPit{xlKLXrRZW?ck}%rS*x`y,hY\rYu(l01TydQ)gE7K@8uH|Ș_u qƼ3m5P&_RTt9Ƣ hM#"9 YFkq_:["=M4%a25X&pQp%M,Iaj>*u^@LdVsLRin2B/%kjhN:xc,dNE)BD㔈pn[N-V.m oD3`l=:%.eCHFtb`yHXVp)7^Bfk`7D/ϿZ5oQ YOq5׻=e-4Mzv.4J%?nrRK_^Շq?/6̏+~mokSӗxw fi[qٽ=.yx-%@0~tɎND8rqOG QfvlQ,9$z%RCȴIfanD7?oC枭l[wr]LN7з~FҰ$pf^ r9S{(FUk.'~^|0= g헷'?Ϸߟ~:Ƿ'ɛ~si݁I 413`gr57~AӦV޼iap9 ߤ]Qkڽ?|q6=@BzA|槹NP\RSnL *p=8 Mb/5.ZVܪp[1 /v`tK|Qğ0oߝN={=@!A,ks0idaPNb7+o(TS\g>?|_JP!bf2 6j &,6VKl1d9:ٕN_cq}Io>˫p'gA;Dgwlޭ' GɋܖbjUwNPjEFcM dSIv4p(bVz9o0K_ފ@U"#,- }U?Oػ>Kބ+<68:nnrsݻAȺG?}6׃5V}Ւk7Vl^ɭuyCgA1ʊXrPKAVLmTE/ZƍzJu[rggVOǣґ:\ZU_X֣,aHGF'ێidZzCEesx2u0@R(Ôi09x]qMJVƎlB\nwv4s϶# ~R>'͜ ,fa&%\.ʲ%1[5%4fBK=lZpQ .3"7$^9%9$&tQ;+~X@3/>ެàe pkYS]]pWTPF "y-H) G4YJ[URj)igD&Ub7x {$z&{vS%R hps oV.y(O߀ee+7dIqt_Ҩ@Uj :$pcD;Wy]TuT(lQ톞!ygP$A EL$\d"e|%CQ 8*1RB Kd^gmmFP-D8 ͑̽?J3jQ5s'FwQho9㖪k`Ǜ1y/W4cgu3QՓ`G}^rU>#w<@q9Or3:JoGٻ6r%W{͢> }.pg"Mːdg[lI%Yc ƖbVUTM-@sE2LH)aݳ3&KUQEB 1|TLT;Rɨu#E,He-X)ɒN{tñ(FBOlgiv䇋)-<Χ,Ep\fi4ϿSo7SoHV6HBB ATBYg7")~!68[WjCN@sj`cS)LjEweO~MWl(]暄j_턷\eeP.bMbդ[e|iŘnpTczm\%H.Ϻku| [gWI&NcSbyU,j n̮+^|PWnpP샦( ΦOcם󏻝~|`{tibW\08% Bnm1c3snklR=a[wܺں[=bGຝw-;n٭gn[=_5:%r>ٖ6WS\=>sy?+X_}tG"ȟ]=ngO<,1|l5 eć$9b`u≯H;mo&b49tQՑd"y83egBhǪ#O:)ƪ#wo(-RA#Dށ.) OFc@@YhCuWFT|Bӑx( ɔ־ ksѢ !S#'@.k^ӷ SEPAE(RmMb{ĹuSY^TʺT:&lS$8K(Ɖs!å,,DcGP&'@ cp` ֪IZv\t\$2/}3Lވb]яEژcg06 TLC|m{fܘ{9n}t&{ xd&2OZI~Rzq\LL~d=%+.fyCZխF)нwMc[Ҕ$hcb&}1YQ`ݞXO#=+x ΁p 2Pf.EZS4 B:C7:_!=&O~]CĬ׻׏kk=lXGqR27&hCLCǒ}V"`R3Ô 9,%gPhrLX D%S3Bw7XǁdK󢫻+_;" чwcDWHi*I"*YsMOe (ol1h 2vfYV ShTFY,df6׏|zcqɅ:X]]5[4ͨv,QVWY9 R6fH!9%R(+$s-M58iKeJMJ2H =˛FV*l٭wY@kRq t=ngۧX%gHRfCZ94x"ί8hf72 v$A"3@.<:ߢCiO9^a' Y;Ԓ r$ޡ`BR6[TH/IJ+){R45@8jؕ<K0 ,"u*<'hYژT &!pdW!TlJc (!e[@k++ʨÄ$^idy85l9IT<hv/C2$6bBaMP4a5өA )Wg]cTZUԎ4*/ 'eB6H|y]+qv#v5XP8 ߈ڣ;0]7WJf0R6.KI[N90{UPTy̱)XV29k(TVQK[,NuL8w7=+0 "6"lrDҐs0D*o'@[vkV!t}MҖ)(7gk yU*D LvI3Ğ2^&ƈL=Xf#bL{fɡqqm3^cr@Z*hdO1)aڃt2&%k mcKq5. 6ǂ0<|kXI?ۂG~A:0>Xu/oT! I1bgʼnu!Ee|U=(lXc nbdΠi\phQde` %Ug2&XшIxUt) 9AYDR@ǒLJNi%u>;cgkA=`w'DY(B]X 2'Rk8Rs9"M^Bx.TxVѥ,BRHEںXd:|2)& 7 Y4͌+\`"((( %1SKʃjsQR݌a6֚uSJjr2&p5@;cԬD nԱ#ұ#/#<5z"wX[u[pfиHہ^#įhRe)Lonh1 #l3`lZ1)KqXE{! 
e+[y;<-_)dpЖɔoA,$7_9)TQQz0BlPJݖtL4|f3aN娏&o 3ϧiY>owIq:r*< l,T`zT*Ui< l,%[coVbsgnvy]{_<%%҇_/[Z6L L]H:PKdV磣 2$cHl{jlƠo,HSh+3+oQ cmS `Q1Vy%j)32BEntR7 <ϗv=M{鍗=_lTrD.$M%`J% ","@4QaPK .#"<"Թ'F`VAG b(p)2O *'HPHYDжxD-^]"2>A1 p|-1ʩXQZ)iUHLk5gFR=+_-v*Vۻ>,_7Pgsvm&xFgm%( 9.(] V5)Ȑ%*GDx|FyXi Ȗ|*d`bj6o*bbvl 1y7#Ǎ##IY'^x;s[酨T$ȁ]rJe\(}7Mha8TchP2Z{vӸ[@nXB6ZxUu]n1KøـKP%X-t ,Jp)cw4!C;Cro!Oy|?{3Mx}ю_gNO }]vhVors^젋Udz7^Z_( L# i:j;@ʉO}_ʫU{}+\Vߺ~emzkդZW_&ռ`p9A:.WU6/VBoo[77C ?ְP̩w[]+[>ڟN7[wݬNu71BQc%_U>!0%5gecųs$+5C|aC m]MPd,P|"A[$Z;TsŎ>oy'Lrw{,Қ$x_>0@VOqOy=hj:޵-#_a l\7GlF>vLHX&դl()/T#f2qN"/Z9?=Q*B S}?79xWۧ͛'}Y0Elc5ث7R=j#e]BB#WJ [+~޳#5:xk< O{5*dB Ŋ\+/zʤ1ՠ]+xK$5&FT>yrӗƥgn {o.#T+EOJCRis;cOx~;]˫iցxA0%2g0ggI$g4YN׿4=i z@4j pv,ad7<r9Hpȝp)oۈQ>((tIMQ*h_ܙx-yF8FT9$}'|jktb_GѾPksA4ϾrHpoAZк `ÀW_J!/Uhy n$3 1G?-"&0gb' }I*{GUe]ɸr?{9]7{2Eryy2}\rk#ͦvboffޖŶX] Ld"KcT$&uS^h9PbF *>-vKr˹+!F'5 0Nh(+Q[ 3\P'*6Ikt:{'*C'7IH9 \RNc.\Ui\U)4W9 ẗXբ6bٲ5/khC*\L$<8V?_F颀m|h&~!YE-Ӈ `u7Nޭk\U)a0Wo\y|تAWU\Pb<\U)- 1Wwjh}}2xZW6Wlj+_ '-1WIv8 s0Wn0WN;~Aj?N3WUr'\Asu= G _ 򢕹J{)J wsUj0Wo\ 2W$bU*sof o\)/k& }nj/|-jg&m֌ke>;^7̃99#H -GlPh>'yCffm}:/)"  u-C$93kafϙك ڭtgMԃ6Q , XP$Yѳ" dٝXKhUqk[?e_g[i,T9nJV3(lm5ei}2kal{,,\=v5,sB82҂6upy;'[_=Q0{,6ĭ蕃| K&.!FAnFkmA QhmtE'|A)T@pDoPE($ ^Gc4`3q!nZxVH'qt`#%F$M[$4 'R'%w/K6&Dž('~zw-99-Mqt~,`ڛ .qmγt>RMN놜gI}d73q:Ͽeٿ?j(hMbݱyq 4٣:;/8*.1@bb:$߸BEVbWRtBuT0D(g?z2!sq8ܗۍ!`wB"-cΌ~˾0P9TnpPu;~324aqUO[>W6xcz_7۟7>͟]a8mKrGϯ{uה+]^;XȽͧU:v*Om2Sm7ᦻݍw<߳_=dt'[䇛~2'ݹ=̦=[]s ԣx<z.ăsǛhԛ/WZ|m'@}s;>Uqӭ0u~۹~ߍ#W\)3wp#_vHOi-`.J61T*z̤ӥ EυѪRp'<6u3Tmc^5vJ̣i8ʓ.b;2;Fn46ws]S_'&ӻTesmT^85WҎaBVw3&#F6|q;o?blz8{רb<|m3}%f\+ dh}',K28sވ9F6b Z}iXAdiP,!Is&hB&8Ĺh%17I' 6,tӏɞ`9ޕHxiH~dZ # +ALp^&~銶eFCG/.??޵Ni{,x )Deta&pEW?9ϞOI 8_kE&n][ȣ%`LS)Ag5K -("P̒F98C̾1l&΁iM}I'ofK!~v7XFt m&&d➵—D6+t"ZUԚYJt}2<-EB/a8q#P("{Lԥ'/t.o2}Dvp\%fKUj-ɛ TdzBc n&gz\ˮk_{ܗϺS,\wY4΀yr&\wWsf6 (I!0K(Gs[$%rݟޓdiO(5Ԗ Yi;4hRZXuZʀRgaUiD$&s'_jSbϓcWb;p\<')$N,8EHk4I)BԒcC 8LVBe ;& CP|ʺ8Eע sDR+V΁ <ƊL5u :k}HwjNNLwҍӝNvh5/׮L.xzSGͺ rq.G|# 8AE%/G+s B["`ZFlZp}C,(!7R ،FWA(dZ[fVaV$ V$L!H-d`bQm8HڬE[g?Agq]ǡZ{#nu\ؔmXx; {E 3 7&\.#ȴ\s3C2dc$$ͥހt~5q`u:KيKkiɡ~Qy{M)Dm1T_ W)FdbI B=xTFGQ)~q[ӎCq?|b9 Yؖg?>Si KplU-UކDICt&'J5LU?ۧDXIѐ|(4^(HWRrTWN FE ɫ( &[>FòC/EfAGQ{q,cJ24YIR|BmiK=6Gu1 cWٿdr #D` #h `%ΕSVdR4qNs P=Ez{r pK)t0Ɉu \y2Y3pJ 6 ɜ0Y;,Mʅ6Ec\+阘WGP;' d }VvP<خ ;$WŔb',J0:E)$h'ٟK"mC6¶YuH]⠱mOlXOQo<ߧ8Mu^?~$fGGi0\ l$T`#q|1HZ#;_\ؾ l a/1W~'zJ Ꮻ6 1BLJg*)V.VB\ dHvcH~HQ,=n5bf~Ic>ʂ: g opAa`P1901K =)3B/I3]hn,o>^WmmXsMPv3/3~=o9ӝJlr4NBP ^:Z{ &p1 , r#]^qXxťwY:6;).H)kPp <6.IlF< vGBGDiRfYY( h( Jp<' `(0 ۮ]֚8{3=f@@T +Q$vhrZk@[|ݱbj_5sÄJrb\v\dM^x IW٠zrI>=(;Cd:?4|nGE)`RVN Y o(*""lA'YLTHipQ&=_ID)l Bh (BA &G'Lzo;u=h}6 ꓂0n, B/ċ/o]f[nLE[Qւ&u69Pt 5hr{x?vF;{qѼL wtǺhɇs1}[fq?/u ٜta2=ɂt緓wiN矿:YfzcT]D%b, %9EbKDG3unjkq'C~=3kLFd圁q)rG^?i1NKJ4\::őEԵl0 sB.i)șK߳4V$v4@dYC/^V-Q@AL8.):^-~]i6jM{V{?xu^Ώɕ55o;Qz,TaMMjiQ~7\\)u:rSC*[ H |3%yym6Xy<[/~se"G{j jf\ajm}-$=/>ojA(rma+Z^DG,2unibbq7:-m)I)s|rsc0p0|~8_11&98 =^ܯf(J4Uuȿ5fIM$,xv:Ba*H;g+e .or<ϞɖQx",$--l~h{tx{IW p1^-fľEZΗ7 꽃013e%[}P`.|?nhnu݅[KE6FC~#9Mε]TtV9FSX`9uDri.lVN>'P(B\~ Yrs[ _vf_ܝJW&HyQFkD"$Hӵ !Zk EW8[Ӎ^ݸJ)zRO LBouɡgW?kᘵ'\uU}VE'nz5^ڑСRI%)FgJ[)ci%#7RZ`&ϥEfFOBTs`KI{o$*69!pB,^* ^zg /DՖ8~KڞUvy1n_ގGѼPT-A?u`9^Ύ4̂vǫJ-Sq-HFkm!U1Uɜ"*@Gad'й@:B7Yځ> CNiQ) dhA2>Đ2]ܞ-cɖѡO#X^4|z]@I(" sQTxo,wNh+ZagkQײI*\n_PHECT`l-=v}&.T_&YMd\e?[@DĔ*ͥ$[9z+1)c=R}$ЧsP?u) HwaUD{W:yy0ƽizQb/b.mCLNxDǵuW +0)| Qcp[lC7{נS,ٵrB<uűWV20̸R~T%dAw  J#M"#6,\!Y WmD ֝diƚ-3aөh{ޘ"eeIr`t#&$: %9֖`:D,WƀO 8g("Xe J98C[2:v ꭉ'f}}VkYk_Ҥsd,ʤKtTv=0}{C;Jג=itcORm鼩--mUyGfi<}d<M|-(.7^WkӳJFuVRs|3匼WWpbF53[դ?/+ /HN~w_ޜOߝ p{oO}fSIP^߾'tm[u-ܣk|ӯho>ݨY hzf H‘Ǜ1~5ŒqEQz̧sU* ~\ Mf~(W> ⲉ;$KWϱv!"€CIfл¬&Ě/'e.u͆06&)t*BGY- 
c#-` \HaIP4*^SK:#0^1Qﲼ^12z:t!L1rܛcZ'$'QiTr≐xC~Ȯ** һ-˽2 wZDb#hqނֲ $",QDT3g nF"WX\4{Fz qDt !LTismY#h!:"AGjL3++n̼>L#Qoف;dbm ɥY%7N];#D7gNѝ>[׍[u_o]7t4%W,y>i>{kN +ם5Lvrfdnj3Ef.LKy\jHi?t ǣ Yk!gtNkA&Yݝ ^#%͜׽|zyuJ`A»p`85 g"`IP.b>wՄkv(x:-?ڻvi۪T9Pm$1&n<ݒAZJmt$șH,2FQt[A{LӦUŲ&E@K@Ƌ#X(\`xB T+"*^ 'J#νup:SaF(Pg_d0уs%QLȑWNiPXCDP0cF2}VJyԄ\LRN0sƐłH>ۂS="L^3݇w5mBk'V>T1pv4¾f3Zť뗯Ο< Q# ȳ JRדY> :Iq>M$ٯov`O3u9V;x&m3KAok%La!םo&ӟf35.G8CR,GBE0v _N(+ت`6Ҫ/R `j֫]Yd A1wإbV(# ue6ћ K**u !XdfMET\V71YE5ԍɘVXЌTeLumBa`Cԍ.CH<C2BD03Cxa% scib IF.ak!ġ8n^&V{9{i έO10j 3fuL#(3D8kLTxT&"p\q%:bdI0`DS b85N=4@L&z282%r̅1 A |ʋ)Τ b -)=`T4_k#c3ĘLl ULh)пp]f*->THwBnư #Hih,(1CvPЍsM'S/[sLCi>qmc2L3{0M )8j451s1iMJ."&LAx\SJձBpzT{vPFpr*q(mp-ݟ +rQ'Yhꩦ7\4Ke'CPM >eFc#SAx5B!B*a }ǟ~}ak#f=.u&LN`]-֣@nڊ-F z0 ـ3ǎ>^] nd"4 I5VGy0Pm$hR./2)Ш3h.IIR!0:4 r3w+usr[0STY"RPYTbL!Ē)Fa!$rih=N^QF;Ey:;=)eq[tdiTlBcr>z}َW*L-V-\w+w !U>3DMMTTr^įˑ_- 7;gs ;"71J ֦utr9 yIAqF{A0Lnf=}=KZO!w_=.߁?;W-5+h'oo~렓A~';~Ͼ(z|P@WG[Ou~Wxe+=LO&kԴs()ϼ%STHdK `6J]TѶڨnxʂ=ҭLp{'_T"Ӟ R0ï0[|Φx&wX_!Wюhf<߄Y3Fm<{;|u^j7= BHNwJd"A0hʂH0I+'T|cRҒg.͚8"T7ΣUvBL}UG ,x\_X+1-s"oVGéJݘKBUHUa* N(QTtzW|G~saJYnAEJhe֑p'ci㼯f)rq*rE]ȹ2$i8UpA(ȇ3f8ɈS~HmG߾除tփNU2;w4/4t^UjZB餰r15|餰'ǚ}vl?.9sֿXd֬"|~]_I`FYϢ, m4&dtO*Ne~'2{Nb?T@2R< BD) >2|鹀DAikIz~k&&i4ǁp7z=x?k48i8 {M|7ߟ9 I^ JwI)KnYJ1$ҩކϣeuw((v2٫A9x?_>FߎO9L\ܮ?h=/e6_flWe ׃G >غ-lx:^L~GL}Ӂu/!/ (FzlBDߡȧ<,z_E˼}/VBZ D3$ōQeD \YjL@R:BhG}J)mo7A=%j]q,x#nj/֍*z%GJk˼VI%;,==W{{p }LHBq}VF͉Q&@O'T3d<]\XNڑjCN>C ЁOͣPפC~1Ϛm1e_!1[ j;`n[:tWoJj!6ׄnBLx(ʩ>DdP@6TPEQUɚPm%o=ޞeMvGJ^X 2?W'?rW1'pj v=J,ԏ:عC,"G=zsIh{ڙjxq9nVuU:x;~}kq㉙To L<\l'F֯r,]\jqVowe!>%#t,omY+Y_Ji4ᔚ=s~ޗf(L;`4KQ(BxoژdB19%ZEy1܄@T1tuVkf+_U-Zm6m0E)*%+PUN%Vi${CNj řhc8) ! P+ˉ;&6nEPNtZiysAs[>D,`i0*#1 btILtnzfe6IH\vx $~:⽯o\0L# o.0]M5ﯮFo.𧴚zww/4wnp?Y:P(7/=~GʈPqL/ȿ+tRݘB=[[Ievf_f hk#I~,SJHPTT*eh|nݑAf&JY J8 92E(w1Q WC8r#p`Đ 1"$5* bҀ/)@ a<c3(_cYGR1.O{_8 KШ䚅7l:bK^y*ɥ`^R 1Biʌ tuv$Ild0ngbge[𒲻J춭D&&按 ƕ8¸[Dw̜fTLKgJB`l >K.]+;߶mfD`W<];_J”|>ȬT 4 _™ɪ2,h/_m, q-\<";'MV0"Q<$||!?&7ha =Bhƈ4U e_~;f$l9R{U6*CQ'&j^ʞ8(S*I̝ Q=l!Sg#L՚7.ƳQxs}]{w1Ǭk0:iNX ZQ5d8\_uKEJ#ONDkDUR$:A0!qa vY^x > ]^~{cд Ԩ6GnN^C%m?QYټyT0($_#owH}ݭ -OAbkO!bLo84˷:j 7[8i}<Hў/+iOln`Y J|7}at#e+r>7R$=>(Y 0S9ƳmWWԼp ve.9Nbi>ӧBAodOpl|f .u p{(>7G9> GwPD'CE= Z䠦]{+jξ r-<-,osvӺD]v{௓ɦeb Ow3;P~/pO`KEoFkR*7kÍY򄤒w T[$5c<受Ҵq,-=Ia; np~bM8?&- O[`$ QQ,Q<&QIDL,jm vv6.=~?7v33y)h8_ƙ!juW@Cd俒z R8"c {v\J嶨q:tJ@KlCNd0yL2yc~eat{{ ǡ p8 IC#^pH%9Fkʹ!t)0 F!HJ W&)sՠ+ǀBo=<[Cj[N% $<[?Go|&1!,R,ቄ֥b $0$4TPi"(vfj Zy)QP@&I io$$!ǜ2HD<&K_W4/;Lfվ^tL }l {AˌvhO_:> rh/̬xi^jfWكOd_Y MQν VxtȄBή5fo4%S+現ҿ9+W6 a M<;`me_;Q6|Qu._s*=nfEަp!&#.f$/7S1gFq:c8$*b'ap,DCeDlxXF\X$:I}Jׯ`UBA1%%yqA(R%eG~it(0c%Շptm,i'3q2uu2emףSbr{( ˸h+ mR6ְ[?M8ğMﳯmzuﮮx]/kӉeWGw^`A $!s ^7j珞'!AѬ:=34MGpNn>9l)L$C X45$EdLH$PHu'́̑럗9X~gHG/W3M9iT]ϻ>Shryjrҳ&N,᪳>-47*-] *7QUu}v( 3 9|)f& q.ђ.8~/ÏN@L I}c22rԕr}q}ЍqF['Xp//hIr%B;r|y^XL@eG4,2WѺjD(5".04D`$[ʣam1tbDAfR j H4}pb/Z32b틉9+5=6x#އ8+1@T6L,Bf\g bTŸĐRfJVZ#ѕ~K͋AT2cnLhꛉ>v`HOM~L^j SY(BrF` J4>\B;psskTZ' DvjAݖ?E\pm?I_3rٍ a)o50Ǘn! 0͟&^&CE59 L6.6=|?ݍ&%=~/pOa cD}u /3>V}m'O[vJ)cOݶ`Q1,3K9h$a9MSX5 R6k[M"ʻD )_VOЭn' {穉@S)mnbsJJ:tG۪PKo*1j'YQ'e,>& 4ksWLxsHWP O$Fh3wF8]U}nT_`R p-+bVё޲"l4Ȉ[ч16l]$_oƭ=q[<|pĈ'LX"H1R 6( #!1`baT iS ͲZ:ʪJmfSvbEĞ`& ms7F_F0 GIƏĂ#Q(p\VA^)KкȭW ]Q@J5AsЩ& Z?r! 
nu&x&$yTT.sPŗmfp6 (/c,5$ZcDK&\wV1x`4,lT&@!e(5Dǰ a-C]Zy8ԭDth=FUIa`DrBo)GI?cL9 1Ҟ3"+-n C'4EtR=:_Z9'|Qy$}،6uF8Htk U̦Z :R YPksxH?F SD 4'&ULO6^W[X\ٹ NJ6lo9qSn-ɵUB5?͊B.0JT.Jӭ$ c8h0:0V(AIP4\=73R9N̬o81Lx q8XQk>y0EAxjSx@Wկ/sp`o1΂ݳD+]<Ut% uy1WJ7nO!X B|\Kbj:ؓ;\+Xvmԍ”?>FABC#Ā 8 (0)P(`LEV1ڳ$ꄂtibf1dGJ/e͉6Η"N:dfsqFq $ #T!u9C>ˡ#hӲsj`AsRHW|0֡^h̦ 2\֔6E˜I8C[yv^+?Mh~?M}M"i?.sH4 ﮮx]/~SBeu:/ \۷o<3SiҜC@}'%R-4g 4iɖꗊ `LR9xuX&?nR)n {fzݺ17e= 0l9`ZPyh ͺ+VEaJ$ifphK$eᠤWźe((QFv*Ѧ G};#N[SCs9o𧊞N_w}Bp{R(P A,9&L1ǭmlsh5+>Y#@GgqÉ3!ʖbbA L rV!%c À"fk~i+2TLwimǚ\1D~H ';ͦC*0\m1FY((ɍI zg}t옘aٞ;JL_K+7QElRD; 2L$2͔vM[nKyA)=䠽.[`Ƥv*_X}X@Η@eBo_NUh9SǦ5q]nLj^4ݰXۭp1vY72[(|TM}[G]ת4U'EY\Ar,),竈ڭUp2q!rP`) ԓ XR‚3NLFuwUdb>|Bᠸ(*PR" XS@a L֑ؖJQӤ^^bXvIsHxm8~&w,;`Ь&D>hN{p ,H*R-a@͍r8~Z@Yr5$}g›\k3sbfY33W֮dGEAID=.1'i `i2 ¹wg,fo4~Yo%ehDyMHxNY pBL/;Pd]y3 ]_ OM(=N @),>ŧp|  JqA&=PbU""l,jT6&&F%܁By3I>i8#a8qғ6_[tAw7'?.]ǰ.m_:|OrR gE'd9b,m 8F)˻_?[^%|ds\8fўy T  / f÷p&~k\$B3Ӯb~\{FOA<ɈN^;/nyK:eă}lK7NpgZ RbCp9M&ɃV^2ں—ssmp bvG|rD&uJƆQWR`a3Q 3Km>ŝ0V#Ibс[V5KsP2ԣ:]V9hN+aۘStk{>b 5ߧElKI# vХdR ɣIE5H/J`@L@\)$e]5wpKe_# \p*:,7z:cId0id֔ΖZK $ rL()5>8="8`J>*96 8CB:%THZK-:U\b8 1SE3QJm)B%W)/܅u I 9Yܹ:?>/]} K?to_<(?dcr^׻1z>!3e߆e(xA)f>huNu??0359opTGUr.gf=: EOo΢Y<*qt#I~-t{: hMe/4U!,ڝ8:6xƗ6ӔXSz7_馩s0YPHDA Yqkp(bZ5p<ԩҌ3,Hdqsd)M{TҔ)M_`AS~.V̒-&vv$ņ*DJpj0{)SbLRY@8Ov?XۍlMjza$xnB)j\F όtJzK`;nFb$W` u`zx)^ҢQC(^ RÊ~xI7=gbozXJ7D9ȴH{ᜤ5- &e 'QZÚx5sqgaM9{ 浉ZA)uOC4q.#eH2E/(acHApF(I D7r5Ts,ߧgHBg 9dħXiI>ߟA_ېP(5b 9sOztbF_{) ct49h_ߥW#cl`J 9~Ɵ! I||vh{Q՛!! Nm}%5>cqI1OA~@%+(uщyx+(sEIkTEd0Q O*`(}V{/:>)N) ^|*7KY>x*e.kq?sBBqyv86٫YgWQ!H9j=wPg&?(jM%?ٟƵW>_9?%B\g/w¯6n/!!Jg/w8rztbF_r:E`iiq9 i3ǐ z^tYsݗ1Z"Y3B|h}q/!]LO&ShhisI.9x- = ӚgxW+%g:o-cyJ ӚgB.4I7CG8g.WzĂ>Dk4-`{m$5>cK9:r'K5xWxJy1=2>5τx9N:Bt@'qϑa'!M{߻>p+1C㑰iM´k&5>o25I&a 5 7 5EO R!·Pn!!)!l #vL'֭GWOc].5r7':qR=9nj*eef*#bLׯ36|hB<~vmnh[D\%p4|k'j&-sGWɵY 4X UNrQ2{v>r[]-\DSɗD>IԲp$t#_m0\[WN 4(N'V b G{ά8BuˊOD |uL\]qP;nu[{ɋfjn_~y&֋q %[Ց0w$Ke6 n@jq)Bv/ؒR)h(w"vX*ZJe^s-`t\"Kխ/QQ*{բl7&7 S1@BVfGD rOr1FM5fʌCSQê-]?,E1!mia-QXMT<&"[]Cn>>^Qo.kՓGcpj1밸"-WADLnmcY+hOOMy⦎[''9EYREI$%zXK!:V$E.gowvvwvFy}޹=hF`*^ >Cwݔ\-^hy5zEc΂tؔeTT\|J9'XCIF6Pߑ6'$Z2s"|L<,r@Z` CEc}=`@U3wq:G8HPˇfpyx&.<}O= T< W\rineBP<ú[ !Lbd ǦNgMǦ-D@5l~`,q2׋QQdRgԝbA<*UC@mp)7EMp_xvtqbL ?wpFGo wMTh7T*$,p]uL0'`p1ud4=?WWWs=Fg&qދ;ߚYQK|{ sipvq9{(3Br" qF0<;/HLM/}.vo>yTFht?2a DTt.|<'F6bFyS n,w,]oҵ+S9.#߻Ҙ+[~q#>YŒ.+PB~N3O rb&cw᰻DkU]_kLɺܼVu3n~ =Ly7ǥ(q6cFֶc}kYSeA )lk x>xmdݹcg<8Pm,x۱t䯉0=MMKa N路Riй fp.fRva9inſ*]D/gC 3RTamaЕ`U8Kζ.Y}E0b:=i  ?a$3ԫj1.d#ɓ_H-ƅ'jZYsk{z`%d$w p9*@|V뢼`B(-8f/2֞F),0ZCv_Xme ЖK6\2CXݾBRpM٣p$^B06)P+Dk:ex;|v5.[ -/W,Va%WamJ+^bwrꋢbx,Kn1 vB!6"-s{5blYL3@f67oi*BU -b  U?VPSRؤW7ה~ k\mVQ`ŝ- ݍ^,`Tc-X݌7rƉ\#C,fD!D|f#s;'lyy_tgl z9tsxH*^ڼǴ60QXƤEZ [E#kµ5E3{Vb}QՋ*,E](kA7#v"r y'g/@==پ Yݼi_(*U.6v|`md+E{P˘a̋]`֞C.hcohgmňں^c|=vDpc?Yske U."Rb2" 5G^y/)'-Vi,m]YB@`*dldg @:T7qWMXɵ>)p^N 3(c[ltž΄`de_:uzƑl: ǣgӞ;lZEwvtNWNw^y욮8m0A䂨+ X㊡HDDc>U$:x pNW6NGkz~wI?;2P8S<;ZB>N in[kWrHT迆Dڤ'c!sF/KVȳ;4wܗW/.gQ,\ #|u"D8)!{'"AJO|ONǓL3ͺgtפ;ӝ۳19EX#vOߍ'?fNAc \B ry1cB&W2[)/n\]*91~z_PRmNvg;mU*.SVʖ4;?"^, -'4Ν` W..7LF+=,G}vStX)mxMӬ:@m|*}ǽ-rxM+?hn($%Ewy.(gkXPԷ%Q)]Vtvc*J~LsD5y_Q'oP1Bfrbo,1y'ZOzߖeY-R G):Lta(,m ,l±K6zIVJ9Hh<Ѻu) #qʬfKx,dISS_*Уj}_{OVDmxWCF,Ea+wXpjK֧ii PmY@f-6|6-5>>y0ӮO:. >(R}g } 9(. 
EZ@6dk}£6p@sw?9f,¼[+ֶqe)A1lqI'C6%lķZ|dʄpGKns{眤.&lefIrl)9tui6R@T0ۢ>6BL!=RF!#lx܆fشyZch(D < ŗ+Z- SBxSۇNT&v=fGN`fWҁݸ0غ慦* Ōc2{^|.,^HkpOÈOEx aJj 98"B Y 8r9 Ÿ>^$/A!† V8m[(Lra.c i e"Gu#GsO<׋IOc {:(flJ\f<c u[~nnKDue;^}iw$]k@07;$I6J̈́ i$t2…;K.F enYaaP%SM¶q6Q~U^1nylA9뜛Û&_>|c0VӁ241'(-诩9dàNw\0',Nw !s]Ckb6PqC"=yE>#!sR851(m?F4fÛ{$ \ %QAE's͠x?ħ8~dOC'~noh {x7\뎿d%Hŋg?\^~$d< ӏ?BK=IK_?]'__{߯Vzj>]M{A&^b#%WޭzryO:B%!KsRYްl ](AsEr!vzOo,kǯ^rGvx8roQ4{ŧt8݅Hf-?2d_d-jg6@ٗp0>oa Gz'p+ON3܉Q2 oO&xQJK5X rdx7>YNǛ4ټ hF.H ť(?p|$on͛ w'oƗ0Uf|9ϧ?GA0MTI l~t˹;r^7遶&˳P?~ϯAm^{wXo Eo0t92ig@ߌ0 W0e>wB $sG?Ǔ0U[.b ȓ~Y[o ?yqGpd _4W<\0_A8J|6=ۏ%׿g``HT|2Oi^PkN4d=ϸ/I|ӊ{Jȍq` }IE29kuI#ClF~$'N4|nFMjӔ-xgdǩ Ef:ͱl0=ǝCo4o5N=J-3{y{tQ$K ۷G@gS ;ϙg# H-$4ʴƬJ ҶVBt~xJٞk j#FV+dcI;Be>,gi"3휤,V77?g+\2M5`})rP lUR O<}8_SZ)ɂYܚj:P1]ߪŠNM8Ũ(5QF5hK$*OMԨ^,5QApkktdڊ 5emeݫV j>kcZh?YwYK*BwBG~vE(7בt4|ڪ"yk+ǹE)FDyrKyPU\)Grȓ>0/u"$ZVoC~XT#6;?@.}FQg<[߾}gS#A# 3Br'L>p9GBĚs'RSDsjs&sK3؄ސ|74B߼ v G 8rʠ驂 2ʈX(ѮЌ1$@4"͍~ٻHn~ dXE ܓ!!垂?{zWχG#z Ҵտ*>~Mb[}h6mbyn\^WQSɇO7?Ye~Vne~VAb+CNEǢ~045DQQ;K&6|j^LTms$ۤVmL*O&y-rY2Xtɋ,FQ㾆hj֭hj2@H:*Vduƈ$o=fMLċU=y%ąVu"?ͧNv_S|6VLId8y{JWtJ8[Tԝ $"Fըq1BursӲib]_5<N蠘9h{lϞZL a!O\?ֺ)l9ognR&=wӤt69hȊ&-F5RE6D kr2 ZRYvKZF`D)e(hꒌR;RCO9s5D3B40@sv 9 ڊVIA9Dkc4AZl%pq*scF<johkih|c)@>^ zqxet 2Le*mQ0SvK;;|t^}tf&c%Ŭ'MaA1fL@'8p<ѫ~]e}fo5)%'9?|ǐgVl< {x򃼮OdCb@::G~ ?BH}zvJ^+w@b\j-7/ԺgL3G~:;b8)/+L&BLeU9ָz|=ȄYcHegcͳfd-3' c%wK|9Yð̟+> n)8]lڳ3Աwiz;4fEFpB\9~8&`B|8G!OZA!dz| 8c9ޘ<9q'ٙ9q64N8f-~(lQ3#~>8AƋ>2/V)2L̯k߽8ZN'UTQ\R, \o) -fUovw훺Ф}S?X%I1~)ۣLqP*}=ԢYuam,*e-x L[z4JCxWQI~"@&:r@?, CZ\ɺITsl|o~ۢ$P}8V}>!fGշ1|o;>r3na1ßdς}z3g1ɾ 7S/^o":>4_TXY9 sUtPehg{Ųo{ TK2;UK1A^t`*,ؗ ê|l۳gɧ48Oq]q'"r O?~׵, xla02n)Nn#1ðiL9gܟ][nRɕ8ʧ8`Y xOVF:Xp4wNvOYbz6ʢkvAhBW[)ɥvրm= >; ߈ a:8g/}hܑt̲^p>|%vuy̝G!_w~_70r`iO#``2|:[`\0:$^:?Ɵ'|i .>jg)WZꜼsL;z8VPt`'^6u `,1a@Wmg1z&M;)FРK'*p8*FD=SsNXV&6Nְ( Iͪ$- iVŃγ75h8h0Cn/S#">RsLx E]DGTǖ5gEM!*:jHffBۙ 'uEEbjCp,6r|ELha1+IT~X rO*{ΥVgVf39a -;]ٳIGG;mVKsީwLMOjuכlMdkDǓ̥DA)=j|woqqSZ)'D(+/~G6EsT(_^"slE&6B#"kkV%Vi\}3`J ;2y˓oeට]%/SKq1@GzUxc{7?PqVB_$ibqVުJZY1{YI~-w & DgN!xٗ6T7p>;=[ ^:lus!HN}qkGggj6r·^2fϛcjHMEfRPo tgWj&\:Oq;tji)Gv0@fg@]^j4fk][][zXw=UQT-scI̎*dwէN'b:f.4iiֲ x#HN//5Vn5V֨Eؠ W=8P eԺ4riDP 3wԅ&aVH/ߴ:)Ked+~5igѫS5=6-VXXab0V걔<['dAbj]$y`U]bJzlB``c*^=ਧk⥘{I0ĩh.1#F$59#k磵XzjyBvh P})@R(+j4YGVJӬ5`N O(U Mq~ Gqil܃[HIh]u,17/Z1&YT@`1TqS9Yq}J6TtXe6բ ]*+,H"6B8rM Ⱥp='W8f*ѧ΃򾪒ejVŚPQ{]"\r(5O8khƍ2ʞj=PIvgF{.$2ysgrЬ&Ӡ7.$z0 JFkCzZ g >{$ 18&qf$΀jf3(D{IӐ#E'ԣ)i 7!y5օdbn JW+x7@PA2*G&!iglz'~[ܦqSp[Xdnw.ݏ%ݲiw8w((]zL ./7on?Wvq{?$WNa{iΡpqGgn2oQ׌s>wJ.:/>\]\kmYE.@yLs9/gajk#ˊ$vp%Y$R&) 0$4Y}UU`W1|CuG4Dr0CpGr w34wvh\@d5ٟJ$ng\?{l`T 8Ϥ9nfbylX.uh?mr.$Dnj{4?N#,MGW{piq+F I@5CZ CƲ~;;4,b:1zu? Iٞ1nr1\mXHrI'a-n rypԿ̦=`I'cAՕRoD4"$fYa- #!b!% X!bJe"SNȶtonLؓF]Dij։4f"6FHcEF6Z{6˫j;=1ղ= kى]Php)$QXkMYM FwB{fq,U^?3ftt_`P"AkttfO`[yg3ӭ[AbϨKZuy;8J N!k8㼂#.yv+$ʽi5/8_jG|BR(>$F(CH-#NgYDO{)1#!Š;(#)Hu"!jɈEZ,Ӓ !YVq;f#< w:f0r6p*e8TB/j!ѭʯB!ńݍ' t[?|Ov3Z&_W@bՒwG?[2]|O袔"y8gVbrm:孯}5j.G|f\=<=CZ*pa"Bvi\Ȅϭ7WdɃ+0vxx9I0&HՆs0X+VITk3!%˯Ճd>/}3B~V*k-̝FW|Ӑ .Yc,"]Yv@ym@η1۷vV8GkY~7~3s7zظJ W~ F9Uj:q9iU8pH3[U] @"Y۞$hga3IB 9iI,"s!"D"G% 6PXFpB#}znvu/u^Փ̳RQ3 1Kx"gIJTPgY"Rs 3ϲ_onĆbb58d̢~G;U:.yPq$'D1zz_ȱuS^fq|<8Js|o8|._1F7/RJbK´3XA7~wF9-!?wlP ^V{2`x)qDn'[a-jW'3y4dQʆf [։,k$Rᰨ;Fv8Y?svo-#a sbvOp;-DK! #Y~Ea nEѸ[ i;(\̳,GN+-\ri SbFQi822Q4JL!Eq4P!G+ƣk48:1]r&+&3s&Ⴭ'+deb9:;$㡌GuQVh<r:cXkpu Tu) cK!4.lWjjwEJ;UA~Kn>S1R)}K|\,4k$Zxr[c%fIoBETQ2{+^~ZśGi`'bl21H!+d{-Ts^b4Lx31*k&2 FX$- \T"-yx<eNP9[[^IF2<@W% $iP e%`f( C~aۮSȸ5*㶝 8'H:L<ʈoW=4Y|"HqBe,<eA%Y9-+QU:@gJIroHtHaMfG{%=#X(A;&&-o6!̷b=,e. 
~yC&֏1ZU({H"fᓚ[F^^}ywL`88[^[F maa~*-ͥ"o& M>7pifI)тR,0) R*%=Jؽm ^YȡRRSpy$8]uDPa.x#B**5afUF[AT;R< b5#F#RbjԌ`;Y3"4dj -2D C nK% KL3Ju$φ%YGChy=!?NADPTG$26<Zax"ܞ~J89GbŴ{Cٔ'!|k:6KIJk!_\dj'SSsV^3yeBj9dbUT!bg6kPv]tRZf j.I!CSr`1ө|kygr5^*&ljL+I%4"D1^:I!!Ŗqm^[nwm^K" ̒!Y\yNTm7Ne[%́i?OoB/U#e!\2 Z, {tm4PrSrSS-kBjBdR@]ipgNh}JX,!i^0 TY$P"b%OJkZq+-ZESgWWd]6Mz_]#uf'VYou7$FMx[F"mse!6Cj8 f^Pڒ3Lޥ5G݆ Zq`QAH"H&R3X& +hON8~ȥ~K(Y(8PD gyxB(2J)EO*@-nc`NJQw@(=kIhڭ0q~e??FM?v#`mmjr1{lЊoZ 'W4ZG #Z̄R;W]@k]&涞Ii).:W,ВSCÂhhW{0JiKui&tVoݦ>i}l#o3a:S \849/bs~ܰ`d7}_éL_s횥yUa_8rY`aH.9Â&-Ls\P|HCH u uk鯦ߐR@QԒ* 9Q`y*\JbץC}O+ߡ}agi2>[`9GM?ni._#l(]} _oi8iXwJ՟hکڶ>ovm۶۶o&W!OHc 5$B Bx$B)cI/}Г †e}yL:"i'gդ.Cg8 "TU]Bg"  a[Cķ_!/5& ̨Ro)?YTqBםaj<p=4#˴7 S\ݝetSu~Tǎxm#+ `Jz -HC+6|zl;Uec_rNuc_>U; A+Lσx]b+IsRݡ,wNւҝi%h/~h} 1]|mp ^7g!w.z?Ftl"a8ƬA}oW3:zw}"62sYm\qX.*d \_ǷyP}O/!ev\>"4ްt3=s VXS0ӜSddjAV\=敮|AG{t F<&|{Yvu&'t?—ueA~Ȩ)Zj2E9F?-°'Y%Zn4j,ʲ"F02P@jAaQ !x*)w)~\`W1! Q9jQp 1rb0i#+ pv8ջΦ-YhOm (+"v-.G !R UJ&Tt@5IkaNK=k#!\-Kn6jFඁm 8dH9"iV3T 7x%nUg'tvHg'tv(AiZ)>߰]-^lQY).+AJO=LXd4*Tz>uSZeS9Kx5X2n9"%N8! с+ 謖k.8ͬl2x9G_sCBPAn2Z[kC)r$F1 Q@RJB=mq;\Ls?:c%X@!Yt޺ݍ*NyJD)nu +Sc )w7KR.yپy_Nak .a?NgȽ 7]$mbxo>)L{f2\ H),r'O,Ϲ3QjuaP.2)IU3[Vo*BuuXPk "/@ p^r1W=&mII=8!pZ? 2Ti!ٰSiWjD'MoK_l qjKsЏjI8_ߟB,>ЦVA)"iI\ᖊ\ȝ *,69"}v 瓍5v8-mtzERLx܂g}]t?ؑXr;RL,,UB4]%E?O kTc`򅒁O(X}p>ck\ 2kPBXBnEYY[y)4d3!z/ fi7ESFgl2#xZZ,F<}ʼE\V,YO +DNp0C ى.nxH".j1O`4s6Lm~ 3E ԉgS2ټmN\Yϋr'1?!/2/9ػ]pyu3*oa|qN3pRv;  w|mן% Kn \r197ϗT/K=5h`9, YUFIWT-A# N`Rm 삒>D`ifQp^c0}3Ձju -𯽑%OB"76%SuHI+B3l4u/o\/\btr}K`ZR8Q`$R&gB\' MW; w>07*kIr\Rś.`wQ#3A5tV27z*tnhkF0 4{t\HFV5EӨNxq9<$u+:+k:+gU_7tal( "x,:P$Y2/J ?ӡRTWgqcױE "HGΉS x%X\c,m߀UgϕꁐG#ΧXW+eOrZBRDqs0woFO77 `ҋ n{d]p'owGU:6q0]FOc֠x`d ?i{ld_,=ϲow1qeT1i9/Y+Zg?nzF`6muPyh mppѾqM`ҍ#Wz>ߔ$?*&(' ~"G=gx5tV=k}Lr\\!D.#˖H,mW":ݖHHG"Z {`g$q!V'\Z& Sx6/=yO1܌GORl][bW~/  jMA\ FzZZT4~ 0S5.P!X'yiarLja&N*xwֳ:2(uZ'KGV!R6rQІX*0[zbt aJX7Vp|=Rݾ`S-:;#Gt߾]f p5ntwף)լ4WT*.s*Ju6P8V|:`PWZ @3=:n?N~`Tߟ1j^ _g`>&ЖmFGg |#s5Ͻ IF2Hs CX .ږƀ O#JT֬mso)T "(p`2b 6`F(SeFׁbXcXiBaws D>2"8P. $*! > ZjN>߹L\p8:ŵ[' X$xCLT`4T@: {ڋ eg83y7Q#! [[9׺}ZZqV:Ӝ3-e쭰V0- HArԀOBIO9ĀSF3i걵ZGi,BQQ 8Dx xwcҟ)L&&JPOM2ݱ7Mdze;>Tti℠ n/+GJh1zk^AWioQH*h'c9Q pK*(hJ5l&`AbDW'׽5[<5 ΐ4K%?\>B"ewn6lW1QJ`RWo0@XNpEBcd"O#6Eˁ;tV{ c >F{# PUz> `%$䶞 걾ݐ>WĿj@GB{jh^XPE3ܭH@Sc)3eYGj-e?Q8 x6/ {Sw`y.0%9 s7}a0BM`@b$ H% @0rhqߛx Eq~Htg^OknFN4 =4ݛnRwߕ'h.Gӳ7 qg ӝ~Iׯn~uWo5q򌕟]\O?_Wo~yW]=&ݟ=~zk~GSsD__Glyu[m_ѽWtMB_ژ8 9Qpl7Z\g'hxv Ns^r=+2OM*ݽc5|G)kd[ O¢<|6{+|zo&؟L a'(\Ȑ}mQL8%JLωIw€g_NG 6תȫf,'S( ) S7't6O9+:i~\d30eF||j'!g7'pbld咊dwjf> cM>,L\p#B;H0!A7 E+:a0C|iLdu  ۰,(U>ndS֍"8qM 0\Y M&-33~pEp8hOIЂS~-wVinMڀ!6""A{"AIT. ?@"(&4kf@!+OՍ9E& ܂=#~_nޢk3qݎӯtX W:'߇~8i%t#*4 8ZH;yn(g,I>!1#Ua/:UBJ*B&BؘHO?P> ߙmt!믟uϦc.:KIN6/4N مt $[iɩYr>nj #xKɆl!oŇ`l\/o 4{ {ED4kGw^*9V5-Pㅧ#iQ̜MCɫYݳk!ϥ5pPO8 T( 7ZYdiu {dbM(;GFmĪAåcBgDŽqbmyϏeuuqt}W}'m%vJ< b6?8@ܗ[pҢWQjϛU,vԽi*rU+?:/ \Lm i:;4LHJو$aO{-ՇNh[Hu?U7*a]򎃆|[Ƿq8}+ R MN!"M[n?}U.y!CrE;:ڼIVzyY";\-uvRᲳWo:¼qS҇*G?Ğw]#?-9/Q8{tīo8dǫuՌܫQ|)q A}%,f`Y?eLYF&Y;C)$Юjc}6.=L/ 85Ac17]9mZ:@H!@Tj4T`T`m΅}+y|kԜ?6bj^3}F{ACᵃ7)iha䋡qIx:K>[mk(hz~چ!+k;ݤ"PQܶ-[}e,lqJ[ю~"(mW@+ۤi{NIueFqۆe&y;ɖa,[b /h5YXr4_.>yT$ĦazG4Z!yHq+ !JQBJρuǧ; J\){'\y=·A^Z|K޻4^Ian 3}HvEr.lq>q+6n{⽌[KJi}ڝ-[mB;;`[cs橆zޥ)Z.:;eMش=$cWhVB{NVtwSܪ.vp0z^OLw Nqó͝Ve,lp SLZ00z^;p$E'>5 q-fYw[+* ])b&ƯD>̚)szjk t{A74=h٬mA C !!AZRO e=MrhH \E\bI5Qv*16dUCYDez'y?BDh pCHr(Z6!{ќ7f|2t$e"F T!b:0Ws指_u-&D2ߗ:pR+V!1hZ!I{A{yn]y1mkR> 趮NڒLI<3h Xȕ&Hs.5aEyƴ#՞0(b r "mVGiۄFLüPS0P1 tR  9R*Bc qu76,A<9YVVzә~K~}sr4#Kr#1Ҙ3. A7O"A*8 .=Ex o&R~‘`f=?&Rh$dTЍg?D(+^6Dss=aڽ. 
oyrLh/Cjgr2 j) @0Q!` $0ApcpIzvFӞנYۥۯn&64]8H C@wzwQ>TmUqx3Vc|ȓdv\O`t\3 ГR9ꃚ!-e3nM\ )sɪ[a_D6Ϫ۹&K']i& ) }_@Fɰ3bTԓr͘l-I.ǚ6LU\kU)1x?C-ud =- 8+RO0SDM$渓eoHF]Gl)rmlLM"L-טf[.v?Β?NmM2;AM1F1t^HN9X1B]M`. ]Id. X9d횀(BVLD5LSf ` s$j ܢN~t ֭-&!4,(,8Y#s+K:‡6\*/ҕ)!:`tٓ1} +MJ3C[Otg*TgOWVtO%ff.|DZΆcظMHS7@ Z3{L#X }|l܊WB"76ґQ%I{!(1g]e_{D+᯻ʐv#.:T߬iയT/e^4L1xeKp*֥N^o/Hm9D5 y[|gfiU-I&hv;,R?(N'Ts>@h+*3{oQ^}cFBp^z)eY^fA"[hwmcً8a-t8m9IQQαnAJ-ٔdy]dZ!9Q U'^ rFQ^Y/Œ afRF-ԆߪEP]\mK6=N[e ԧLj-UX/vcrERekX!m+:&0 !%*{Y9QJ댵$,%"^;$i2m.UR)<4499 &iˈU09QZa t :#Us;A$M G;*N.MGaf}@QOlH;oP6EnVռ 2NVnq\PJ j\Q%vs&ݩX2\[%t**\ZutjF`eVA9_&JZex+\[_e)ŸuRc)$snBAt=wEvt%R,nh\U{b^J\e;¾g5vfVنf'(<):~TrIgY`ikW$!\Dd0iٺ TߣuŠľc붿ŜSn%[rE4FxnJqaT4V.= cB&h;u*/Yj;&QØUEqV+FÖyʶIa"RHPEerkUA@P+oݩt%JS)nܕx!,L菱V֖w,ř&>(F@> / 3[J1=o!ult+~0V1?%f=U:J=eH3p/ 4\#DN}5AE|&L^_ GWϡ3ӆG섩V;L50KӤg y;/ftO2O8; ITq5BX$:Ѕpʵ$ǝ/*:1wu6u<>Aaut:s%iJ`Z,"aK76qlNiS̙y$Rح>H)ٜ0Vr;L6$X+VHxMQ[^5 {R,++)=Bu4.Z9V*'כv1wiMI+`;tp]l< Sx/KCd:sR_TVc(f| ]_qEǙtO^Gcpdt19{/_񿇣ި{j8@7W+u8oxeyj4 ݰ}Uw|ˌn􏻷LF5yѻM߽$ebri.CGng7nk}l 04(\?g,Lr֝q(\= 'k5)7^lο^ ^{w= Cw&$9nTHhT6f-S/PmrfTIX;)6d5RcN]}BSj<9_<:W\l)'pgF7`[͝lG_TpsϭskfGM~۷~OO\xMލvoQ`G3=~>1d?h`DWK& M~tJ|?}MOHz2u \aR:cO#?4']|J^=~GA/ <~19ACFE/0$d]o-{[IOm|&so5t橹2ׇxSI3PO^8퟿a;/z^$WחW+O@ W[ 7}9{ؙ~YWYs iuv 9.Rx'WWOFQ7֩>8?~xNiOOy}h5Y:8w)7_߄gK{>&L]֑i4%Vo6X?ΦW(sVz&_ݿD4|s;#}F\X_0HOQ(rhs|4ܻGi?$d&eS 1Ҁ-r+dbmL؁L~? P(Rl K)6,p BT)N9%HbC}902|EUR)ljS( ma,3Ga<*`.CQ1 BYE,!ńT#&*ᖥi2\D:f敦ibI*^}EE G~Z!CpK{.w$QUa-@h` ݉rT"hL5eILQyGڣ' (\_cK5( [Fym.t62(@^y$PЩjȷxLy@r#RvGHHHTia'HU c,ŒƼ_S1j%o W6qڈv JẒˠjܘlqNӶ-4ѷno*J9kbBcTxlm=G؄hLqKr)bh#0W\$>)nh>8X *QS̢7Dz=k)в7W+- MO+X(۶mYQ$ B\j.nU$ ڂ\î 6  NDpaU2. ὺmk߯mdN{شeN<\&XJAQ%uL~|VD@e >R[A*@*,\bg)!@bqP)z8PEbTT46zCK"UVSI b! sS&8Uzm1b 8/~FuCnexa[(0ktئiˈUD*sF'΁ ifNEE!B6I$@xvyZs/ qtb:%oUŕm,UψD"X %)Ȧ -RS,0f'Ƨh-WRQ,y`Jݻ Ҫ"<[Ю*oFJ;\*]l #ʁm#F5s`;Km*R,0rؔJ[NϛixҢEI'OFסьٽs0O=ٝjNخTG5vZ#J6,oMKWLǏn~ؙ,8T8}V|j ?Ël݈uM^4T<ߝVq]a8M><˔7cnxOrJh ?.!|~[P05ꆇ'C?n5SS({ Cz4!"cbnnƬ" ٘\O V˭FBFɔ&Vd{nwlcun%[rE4FdHp[e¹=ZX ʈN;nOjw1*iݪ{.Q2Us> 'aa4o/4bƇոu ]{pxMV)Ŕ[NۃV&`\vp(NLȃwm#SaN؃wU $)d{xMl:@յ2tBLg4돗f`.|& ~kA+.Pn8ۃ1؊ 㥿, 3%MztÌǽA lU *`A%KXb:wb-*RR#B uvJ}XeJ4TL)WiCqQl ^2Fk)~hD B $y^U/rT)-9Um`2te8Vl hm*ug_iezH_!rN3v_ ! @]<BOO]Ғu~j)p.gal.u*ޟ֘M2ux][.IvwAOt*MȺkh߲ůo.C[?7WYڮҢS򝺓kۤ԰rAt>4ܟJ˽DoI ,RS:ci}on'|-kCLS|}^L}[rm]"KJ>?K 8]Vp!;iY[V\nD*&~S=eCyyٲٱT"Nj $sT[-WiLWe0Hn3qLș\'Q$cȬ ZlVڮ%L6GsQlYcY>;xaRw_-žҐrJq<+ӆ>R# |xvgi) L}O>YBgCȲ z<9oEt6=}x0C3C>@ P(Kt [١t+7~ءJfLN(KmZjk Ղ>^Pr5*?5=E'Bgh$t'_6i~;2t/ZDg}FJ\#E6D'dۍfc%Ξ\W%g{$12g 8G*(@30WH6=D 6bҦ!a-1b]iW+˃50àXK0}f]mXFM]_fzx[їܕmqwËxG. cfh_i1[rVp%,e<^D jPܮ^(`+O*1_1(QL0֖FRc!i酥=т|1+r: W}HGW[tK0/3Dv GY׻H||˟'C6?奆pu,f?gRU[jޘ Cֹll!1\f f g?`o!Ϯiv',6L5%K"/DxߐHV5%dM3yżp+8%>ZXbb/%{ k8>J =8Ï(+:pLuhPM(-ܦevP qW~ȊȠtٶbp@-0^ٍC14%rt7g2^*h'أ^L*olJ;r [laFlWǰ.xIf* [+ aT0M"EkCe:1ތy_>kF^ǧ$Z"VvY}N}#}W `BA h,Ml'%1"K͙ZPږcZHW6[\o L%Rձh9o;"{nG`_K\)8|&\4wL:Q^ 88S;}AĪ/#b\@ nn^wto}ʤ'mwXڅD;0fJ<{#0-)[B! 
9ćQRe  ?J^,) RՏxhK[Gv0/Hv$>|Ni3J._o+LHՖ5^|p0raz|%nTTvנDre/; FʖƖLj&N>fU7SK?`)U ?ZLEf"o~j~!-($S%~9KkЪ)<*iDkSh|/<%| H4c]ͰZ1nFrvOh13g0Ĝc&d)(06DH$f$[v]*!Z sc-ce!:01cvc' 'S{"S(U40[aMhP*G;K&NPR|5P{{,wz73~5}-]/_⢫{ced,*Jjt1\'ɭ9nD;~Lj?ε[o, K/{Di܃K(do.ُY 69p@mxZE$@3:d&@rß I4-#uY'ybZKGmτBTRtjYנ[mPWBK JRp]+|e8OwTR y_Ϧ,w\G  kY|t*: TICʊ^aiJ5H@Q@I.PX^yAWޓ—WXV e'_E=K%ka\ ^zvƋoF#2^5l4+9ıֱ!-XLW ֽiN$h!Y_;?x5͑dQ [>$}JpԴ]*PжvaLfnj"gLH!+օ< n:-ϖXe=&1*1{‘PT+ &U472cF 8ێRc1̺Vqq# O))<506-k *:ʘF'Q{!=ڄ5{ h9MyBl/iFHzqyD}uU6l+`A%B?^i:vQ8|}.ۮ$p4Rt4_]}O]TN I ^蕏Ew'|3t*w"*?eNrN;JAh[%NS}hl}ۣu=xI&nRtcXW7:-j /#Z2)Oe,CacÔm6*CC@+9^J_6m/_ֹ49{uw-I!lskNZ`%RH\Hs:{:gHlE1fYgr8'%!0ee"=JXkN- r%E@]RtI9`ax-έuR {"oQZ3[[FI~{궮30Cph@n#3 H*$'XLY#ҫ)jXKv:R%$@{։}M?1գ1̴h$2b'K fsR{9C t4d+BK450F yIr3ɐk&, #ga"i>͘i:,YXc* 8~?ѷY,NSIFO֤!xuv|sܛypߟ}vf>$Y2 _ItzoVfT.=P."]-cјP9HӾAa6:Gze[RB(L+62e~Y)hBK%&5XO:b(Àfxrd _ᇻI٪ݓwx?<|\d1''7,V@xe%/ߟ& a<)1M~ 4o..'*OV&[)-wb?suϨh+P`$OO.KPXd>qC+JW$7kCtir.H-뙆}r+4(7su +=(m;-FYsEډ˯Bcr4& *QG&M%TK>W'[xRt:)LAt}^L}X$>̗Q<Q~ٽoMO22Vuϧc A'b6  5ml7Dd$^뷏{dqH"&t:-þ^y " RqhkYG>56JۂC0( GWcͅ"jˣV~ Mwj$Q~BUh971Z2M(AAS u'\RQnV4Ulv9Yf!Y\=-D1Rv{.P ;@!܇`A\xw[N"ZI>\l%=y{RD#Տע$o{Ϯ\w$ y>W:L(pWV9XR1 j؜Fv@,( !83D(Eizv U~/,>XBf*Lׁf{t[Z6pٻ6ne!B~(p8My >Ȓ&F3ܵw,MDZqwoɆB cB3%[3ME15oXN v=11Bv*5#v壝J!ٶER?i)F`~D_).; I2mF*;\GXYJzmI0ؖ 0* (ZӲZcQyjS/uƩ:=ۂ7ܳ|a&tu@}31 Ŏ!!S2h7Kǣ "Xe*I2 1@mӛZ}Xm$4_r릅&5& n 阐Z!FsRK5PBA$Z,(PpŹypP#(d6orsC5 -g`c(si81 QEBD^`cZ4QA\VE zi 6z+gw~XoD\AWQr 2h SFxhe[Ѧ>^t\s&)FO$nl)a>bpC)+iRhxAKN`H/}:I;4Xozw sR C\P*s8QVX*׀!-f1f@$7dlRCaГ?e32˵'eH v oԼIၼZκ:B*x9Ji+ ¤2j13#h:%WQ#r2 l FB^VS8Xb$u@Hk2ML}#,iQ@C4AmCQ83^ DBpi|KLpiݠ#A]h_e.x:;D+PB`$`lũڃ Nڀ9c6B V|]oc{!DiN-:5zmFcsv1ʀt(<;Xgq{=J% r@ZR(\qA o8W^+b4L7 eTA(}$xQ"ϭ7 N #h#OZR/=[Bc*}\ h@rYU1U=*%&Wi%{侢Л 2M.%侊T \a`?)"s&b.HÕ>6M5kF9+IsϔKW1EQY!#@s!|%V +Є]Z Fl岁sHXJ@Ɇ>8HqE}TR ` #E3pD&$a>o$Ij*SqJv>^[L6hF6g]]&z}qb#` 3+55EW!亜E%e,Эh=8+y=`jeupDZa29dQcӱc; &xS+q)00_ޤԗ~d'qdvs>م7yp'ņgavvކ?A|n>b+ #!^_| ||0rVbǣA}.̗qMR$? 
y܅gCt/_럿a@=O/:_Β4EURA7}u4|=_wt'xxN Y͟"A\ޫob\* gg(+_y3eWC}WESNTHwG!(_cQ O(Ѥˤ;JS8>)Su~L=:Ѓ8n,JS ?y>ED"L Ju2Y?H?NcxkT xFDZCfk}Y{)X] (5:Ȣ|YOB$8,7+m<1Fmg!VkdL80 Lz䵤6FN.`C  R|CZ0F&(AuZd8A -d|t aP?~>7k$wUvٛIQ"K;qN\ؤ1Aj]qᩀ"g=hbF1% a#, 3XA~ ̂Нy|u,,AչD;ѤD6 n\2S| #5 f54g^Z {MǢwU)[$H:GQtX T ,cX%&IURx)RCAWinX= r=sb=5å?Zbrh[w>-<v g(]&G tKeiY a/cP_[  m^Se ͠"lwZ~fPQ;P-JxR" ` {A<<`btQ&X22e(%f.e~ LoRL^Exf?[׮MSDgJ}Dg-'i}lkRc*p&?ggWA%@8e}u˞[ܹ4+_Dj*_Sq~M F'KZs=IS(&wKuҕ5xS=⶝9y,˕uաP_x4+C\ !{ sܨ=cQ hDd"v UsI҉켼:>73HMof ifüfΑ6RZNJ`i8-RdQjЊgЉ2ʾ8w֘5F^mɃXEH4}n#6v (M^7˴۪aE髈z*Va Kߢ:eKnTya 6mo tǪ"㕦跐+qH;KI/04zƆy4rhf-[o$K%kbѢ!ȭf*̈ȈI\\W6_p,aUVbS,U9X REs'J̡s0H-85M,N^KÑԹbݹ8Z*(ϋ9-\t$q{?2hXaJXp$ǵCEϺ-Sz$C~)at*1"]ŷaimb}Z1(8N^o)> ox ݆YKp8Sd)@ۻ`?b1$>[s}T'5%E^vO& FBJ!*K.*#)y:ڲ!"zƖ%E**Xj0bFQw6%ئ5nr;Ec2db6:բI9lF>IyveoY}ұzcB z%?/^x%H._VrH{w+¯Hưx 8lHtܱiWXm$A N~ qZw.198{`לq=ϊNDtdc# sOgXly1H :TA*8tiYfJN;?Nf7}>%/Kvr,`_Psyn&ݢ_a)9%􍀿(9?|p$$GuDhXYOuJcN9'8 SQ 袈H{AEѨ =EG?H$+!dJ˨Zﰭ]z [Z qI0[Rl6*pWNg\/Op=_/_=W} pcIv&Kչp@9!F)j+.Ee4ID^Z&N*xwֳ߻ GP)At,[y!h0`NSSy""yEE6L`Ab#sbA:M>V ?Ӫf]-߉[l 4{JҘ`2/L$4|ҽ}u235u< riLCkm'AufLnY3ꗛ)8ɕJ'OQܾū6 Zɣ&vzCBes")&pK12r-'H ;q0K"H\X!I}YBȄ2{7!)B@H""EjX9X/Q@+EcP6*baXڸi$Xja fZpA(qGKl<)8)ĦS4U ڤ͂pߤ jIU[.r|w͚ U)W,|_@Rk A6-"0adH H03B8:(a4zC+k::S,QT;0) ( Ka # ;cL6P̸H DjOQn}Jn]&]/n.̵y<+0z۹L+"7O܃/|Sti4)j.+zA``wfH W EI{&ysfЃeICt^'׀4}#'7Ik),50]3BuDa F0_=(1Dv85ӣ۶]wQR) (Cd)%^yTBz:R{ISOW/SSE;>Dm0,s\kFc1X z >(WK̝ǤۃKCkHD`)6muhn8TWz ،m<_Ϣ3mT;}x͂1ȑO0ri+Sfc(WvvsA1Cdm+l8:Zِ,4o]/#0N)lo_?kX(7]AKL=YkPK2P;Ki@WI5Pfh)R$ iR]gP-K?iF} Q$乭@,h{2s(r/z39'#y:lvȀ-i{2}Qp^j:~|CNAV_j+5w~]$LQ"8VCi?J$Wyɾj3V{V˟l ŕeW齡A+gB1#J!~W ~R<ˏexeI<\lv},WeN;*z+ۄͅ6R"åA^j#5U|ۉU|Wf1ۀQM)a㚔=,rR)XOBᱮM d%-GDT|f]k2FoG~>懗Cs̛Q"H,xW1?UyA k6GlpQL=@6ܢ(^( r- Gv %.%.d7;wY,HkofƖWNf1z|XK3Ez8"VJ ›CRxH~jUWVbU1eetUp.Z <ՒIƚ ˤpmIl;QI>FchZ ij,ը t.cOeKLq*IB89D<ќ(mJE=gM;Nဉ`Ơ!΢0~8C0(&|hQذ=₏i(F" a3[Ä:F&hE&0Ǹ]mCH-8\{֌65c}Uc%E**򕍄WTϤW͠њPLs@*,(BZcBۀAÃ4F֕<z0Lknp?[՚M$DZd\1"|XVaҁ CTBB]5?J?ΐ H(@"` "-1L--(Cid0ӮqEj2B$jۤsRq $d5 9Sd ֖qԂ/km0T-t:j/5תܹNI! Ѕ yH [Rh9Zn]jkQyݾl+` $iZϢ l0>޺4M>R]k2Ov?t/. cXJ*߄WYSQQ 9^5\2[/\lY7T7`.R{a^rYJ' UNIGQ?[ãXݺb:]du;ҫ[Ƨ[hkbXz׳:': &i,%=6;gv ľdf%vcDTσ8k9we5;1xJy:5qDUFz)0LeI0smǙ/L>wsrՇx3g. A|6oAZ7lFs!lvq$nxN <{{Ƶ1=4ub`ՑÍfQ#t"T#b5s/ [,tTuM]ӚYVؚrZt^=<鵹f]'a2ˉ(R>? HEfW0XǨuƙ1AksS*W DICn^dƄ;xEdG\IXSŒY N{ Qei;m{r%|E줄ңBTP$,l(Hj#AQC2)D ">DNhQ.&>Kv "Z-r"S7y\r &nIzQZ2F 9NfaPkR)*ŋ82V2c5 \'ż6b$ddzӵn/r>]̧ $}|3̍oh& ;f 0R\=~{yowO}}>NBȳ wod:+w`Ɏ ?gZO=V]_I %тVgW)Gho6jjkU ̑M  <鞊 5_+g_GGo~9Wk'{-H_1b!0*ҔfE˶˵e9رx{wO.xK>PdV-8^jĊt<"s٭p,Vo&=^.rKoY%s{^Vx! +& ?ٻ$DѦ'"'K6}][p2[QpQs৥& = c),&1B$LX>hIxqϻ./q>EQnk!9Ǚ NsL.;y 2P"s8/Ra3O5-SUTDX/ !dAm[Z8N(%ExA17TLX&SFI]CJ ]%aY`Hȡ5˅"ZBcN_^tN.RS}|$ӴU2'=O|7}`A%E/mpMWDGqJ4Ą!2L.< 5:`kc *T,Qʷdd@W.ed2~{>ev;M/ڰiKBVN27Gks[ g+?srO"7H~:tN~{~nO{n h~hy>fF.j/]9BmN'C3w?{7}۾ BO߿mxG'7'{c2HD7zW^iY]̞n\[&и7(SI¡+/x=N׶7r'vbxO\~wMRߍ#"u5,y;wDQIU4׏(6 esJʤV ]Ë X=úS/h &'|#{uOgo9'sWRht uMɲ8֐*2m&KM?bq=ܸs_M2XvoPIm,wHر"wbSᓸo<2Qx ɶ{Giz:f˱.ϓ#~oWA ;5 og W=0}OOx?h~=97C&>~Dwn:&q3,/Kny#aIyb>W?נoAyad 9j<>2pv8L`-.9)MUۼxdW"VwGqs@&^񷏰~ҔA8]؛t4z:n&0Y;z`~+9=>%NZk p*bw@uMJP )3)B.N2%A)r1S;a7hnE_z|;ZNx(!i(+:J,-r˅.=2~}: 3L3:91^,UK=Cx`:QVќ &@¢@i>x!ƑʺXR &<=a d%`%ܲ#M,{2A5V! 
"lYN3e ZB(:@lHMQPb5!ahxȴ 4bJJI @!pl-Nޓ6~Ts;0lD*SOjsrb`8Yd80LoJ3o=C kDS jj5?FaKqa5Sꬷqe ysB~\GUELabPwt+ujD}׀Los|6;\ C}΅ԃ澠H*U3^w')*Iyn*Vn5ۈV Z?vWp]iFjX^V-D B:Ake ڔj?@o8Jt"..Ř+)w̬CcX!b"' p=; H C$}GHu>c”x7Ib,MN$hP7ae?`> Z w* S&ʤl^bbkqI.K pVן4la͊CVWQ@Q~3yD3(TRµWҋ a9/y zM>1O""/g[kKy^ⲽªٶU[Bm{r Lq , Hp5 Di)#@PGA'({aJ3Z^ ΅o|2%QrDcl9U-D'J/4P@cLA["MC,J9HUaxlj^sbeӓ-w+uЃ}(HZ;?90;E['# |7~ÉAJ*͗ {A>݀F#0IcFTv]s'pBO ar=‰83^gz.VŷDI[vbeLrfO>E )|"V,ShyMdI*ʘ!?-2H*L.Vx;.LZv;ǍYvН }'#W\L+|Mޔ @Ν,+I Vºk&oD)ǵJQ5\Tm '5)ln"55%LcMϤ04!||Z/,$B͆G檀׃:K 7/%$rοrgQJ5OޛjPGCQ l簿e|ŏ\3ʘ!?-4T *Urҭ(YN>H=ZN i\yQfne̐|>$ Wmr=~{{}|؊#Ƥ5^[׶v#bż=zͼ\t9odR9cH!_`c#]%0tkբ*skV9|Fi(k=ج!t@MD{V^5j= hoޜ>> =A!L0z]< Q[N8>[دGxtv6lg7 ]"'G'_IL>qtdS$ʱ L;J8]rupPRO k3!*HUdj%(O[L&}'ī.9gys96vsq..j=drhbCX8YsW( :wӅo0ް}Y @,H'ܝSp!}~.ƾn3绌=cxL,Q530[}ίSo.{0{Aw IJT;9U~v%:w ]t5ABK$=~aD CSzzT)1O92OۢE;_2㴕dq~Er04p ?{۶쿊?zò`M#NNqriD=F%^6)'r73`Ĕ Kw#=ל\Rl.])h(p.; \o#)AwśWv(nB*’Vy*ʠ%%rf n 0l6XZC`|̨i'9R#"AI F3Cxu^qkErbNUUӘa&JMX3@GIGFb3@LT;N4668%o -x+ZNjRV:򱰔QT\JwB`aLuJZF*h&y Au|}KC_,!bD{J*JEKX7V'RޡO8 uhHrt HBfzNƂ2f&:]͵̊ V(d*τCu|e . !^XXf8S|{aQ]e,:SWHemvRD9W%`r &!Ba*+Vt.Ӣs (g]=D i?r_;z p oA,%5sʌ4c)9xqcPcƛ;.=c3r{,ļ󚘷Jeɏ;gTԇ3bQѿ/ ŎqNR諭GӦ;54TQ1 %ûK溽c;kz\OFE\pT#P;v@Oc8Bz!w֡+z%:1QMw>i_9+M`bаcYDaYyýڅ;V8$\&j ׶0T(>,U+&<&m4d +{ߘV[*TU(*td؞jɨ&@Q,%3 YzlOEfW z3OM2j6d^Յ M돳7`v_#p$.9V6[3m06fgʀiq Ӎ|ND.=1a{z{cXÔL| 4 }G!HIF5gŒ(DŽL=͆RȞXe a@e_ɈSd/i;װlOe^"{,O(gkx2gI2*gHo qJZL{(f0Tq jTg,"DsvW|8g*$;R2ID@o(J]ъ"˲+}FEO3cu+r**$;2OJg$);TO$Qbfln"Ta7H¤uO.n߳Esn9?5'H|k:I]Rj]7]yJ%ҹuo=KmgBMǣQ+B|C "eeÆ Eӏf|m*A; ~-/F sĒHXZ[dƪV'"8Z}l=% b 0gs}y`^BuHK,vѭfGK@u<ؘPיqC1K]کo{2.?@ٜ?5)'8;I0}ac/1mZ-Ϫ5l¬g{elM% ! *Zf[iS7& U݈ф"9Aҏ>קzGU^/ߎVO r([Y߫ŋUK5>[ T s bdݚVrx=,T&wWP*yس?mI 0O)-lҕr7]zKBfY/Eax=n INkSN2ݟvj$&=g_G>m ߧSS SSv Lzk*#밖k"Z!l<8oe"/E$ IY)s`BZ7ջX[Zɩ90gxtwZl)W8м9_ZIGrENDH'9uRL*bq$1b-LptLQq,4\hN UH 2\İH:6c@%"j#5vDX1i4SaY - J!Mҁ G"tAn] Z*)+%Ԋ $.TWgq\/ea9hN tsyEZ2|iYܼr׺[p%ĕ؞[X"'$`*o5 Ű4X5ZD)ْ~KK+okv2M%ӸA'{"ͤE0M jjn|wQGVv>F`JZ9'gUzL# KQ:dhgm4X@W7+ۋ#8ΥTb|  :a_@X ~+]8]Y#s`Y6QD6m&*9ݔ>ka,NH%/hQ#0@#.6Z=;%ŋOA9Y7 9qL#0bl,WJ#iteb:HNicp(eE4"Ѳf.DN K<H!2jV7G5b9nls5_)] TQm ޺QzȰrkw;RO Ԙd/@0W~:b9^~!-( ZעD^޺gI*M*xmA% G?>Iu@1- ?Aq5M0VSjJ'9'0fF}mu,QqlԖ qX(VE$E:jD#& W^FgN5 ^z DKm(6RU E PU1ue1I/ ] IR2-fȔD$BiSh(r[1-y,*ԖWGanCĄ_` >./~[O(B -[!t|1ziy4u woI;J5d=M=LnJvˎi߄L'cS*9dg?'daGms 8i<֩J?Ǘc4\J*rδ>w}g Mޚt8,7P!i0&1kp2hR6J @[?JA"jbÜnBSi8:_i$´2#x G"W<i`lz~x|?Y!pG;~p ??%z~K{zG?j0…'ߧH`5k|KV<$ܐA̿%*4eXGɓ~|wrBbweDà$)x4t\' x=^'jpxJ'vwm?Pݸbo^/}:…lj}=|L'Jj׏޸4rɬN~0K c / {uoOO`E4'§:n49ɉt M{ȘI-K2 PN ti(ӷ;P&4sfQFԘ7lS9lccMSY| wjyp )) 4<0hs~nhڟGϣ$Qc9>{ֱ ~vhLLfLipj4v* N!Y)5 &L#/Z Q>Ƒb@F`G/㩦R":xyM"h\o^9C} b5v]=Oqi` 6$䅋hLI4x\n\ Bb":s4ngW5X[ExTa[~YF5sSOQN= ^M=m9 ?Fֵ62CН\7,m1ŇǦEIdmְ$ SR$"EjM 8Q?zp1_ܖ/ ?izUr( X9@AK6s\7 E9g|lnu:Vnіp !U y㭈Cv2z|pdNjmWWԿӂ8R3Y1E!S FmYxz+@='b x]1Nz ԅ?7ARt f`;3!`5^AGTs)Gs613u ՖzZVx 3.A:hw6\H֊! ZP^M)&c{ pP~17Yˀ_N5! 
&: c#whCOݡ˽J7)$O[(&:/R?Vi无nnrgs*l/D TfL/y`m7/zj(ⷽt*2Gy=nӽ^ۉI9˘ӫ ՠ*Qz(V8\ոsB>yׯ^%B(ShX daƩt% @RD*F^VT~CQ lK+!й`Zke3aMf$ $ҒebX+ 2 <B_VZ`SmoY)TsʬABk#E$6*=}L b~]uB~~(y,A(!9М|aܧߓ>* x#ߦuw c!,?^9%xX%5}B?L+ g37X_&D0elk/`$;,]yH`b"C'=hܰ^@Jܖ$yKiUEw"_rThBSc/^ڏ'rќgXډΖv1K|j]xknH*ZX]zNou$L@kGF9JJkd 0Ni؝LRg+s|+y[Cڍp qO Yk[Al珠gUğ}CgOW޶r%eZw#={w:60攞{ γ:w>p6䌿fGrDnܲ*=Ĺ3[_TԱcjªퟋ:gzW+*!b7cn٧/@wR]&E^.ƾ4fhh/zqWnPrIm|u/{^h(9/U̓Rvp1|3Aф3vyHJq /]S1oW1.j*YX'QJEkJ'ں* Sȃw SkW \3 >So\*NַQ WA,Su:ʹzoseNq yY K?'/P8u}hjѶ7"; ۟[gu,ԗr t)T.0&i#Y l (oy"qZ* Pކ1G&$ɑsS{)DJ/<r̋y'[m?Scβyye zq;KT2:SV `M D@2 Ȕ(Q\'C15b1"^}̋QcSt _19Ye>hnW5ѓ-#`p%a' y5א^C^5dYrSβTZL-H҄) %̨v J%fL`ɍ(Jrc'?ZrX{e%wFSR)J&Q!h{ݱ6 BiY+jy$2ȝH4NYr6fK(e&gP$zF/v@oN^Umo;g#視ۈ,u R׵>D$K`|̴IRt96S4uڥQ"ݞuβW)92 eXL%؄ZQKkpcdN:Cvgђe?tnܑuakPP% ΄H.4"$4ϘFԑ$A[LhUw6_f:?Q`nD)v1i|H (ce$Z H3-o ȩPv,i k% .~:2YMs-PlnQ 1W;]џfw%:T 򀒏#l_FQHCHO7QA09䈓Y`MiŻθxd}`GΣŃ 4e2l=jU+^ݚu&:D!j"ONl{½A 1d)A( $u푗_%#>2 ;P#"n7Sn>&_ xgܺCP{?] ;7kjݣ{7v~[Ҳ&b]sOβ1iUMaP>yczTyK m㹥M_؄`(7xm\{2VoSWAY3Ŀ=7Nz*N?NQ6P&I8\}O_C{;hYR@OnkX;*<hԷO;}Y4 .({b4v@ ihMf5ɐt41X"HH)"&$KA#qJDqYA;ck^0QTZ)Yt W;500%>kH%kg -v݈\PRARRrGT**ZbR6$m' 5aHJ*w qʸN4' a AJJԢ7T#(\guD7Jɢն7TQ!wi@h'1۴kyٽВ~^EubcGoF/ǚS3WOtz; -_QP_ۇW~X \R.֔]r tnv?? V;FʳBaWΕBzu]XY-I (xQE Ab":s4n'4vOMEhvkCB^n+Z{㺸zxqz;wT/?OYf'o^}Iԯtj5$'szgq\4Ui67~"01P[ B5 rGSw (w4-oƁP 0gEjr!ٌf#ꌟcHd u_Kz_Bb%{61f:c~#x󦒳T;c$aGSaX[dSfV2Ip PVL5y3[Qik K[;?m("x\m|3R.G4+J4 {J3d{JXm/Ap__;b_ YL#}3$\"K5 ٰ'X:ن2XbRjJj#hRB̀~ ;Bׅ?OR]f]U1MdC^ .tpv4" }{Np>LO.3YJ|; k2 Byg!AbX'B>CZ8=e5QU93۬шCˬJ8+J8*4㨂* e6bmԶ!}Oz2\!o1pHkM쨤M@ Gi1 $(Kzm4!8A cN@JD;ńKFNiW^.%[C+TU ,uLTeÏS&12L`}5[ (hOlI=Ƣ$bH3iFb4!BeI 8%YȨ0ֺt(":=jYy3K:H,Ỷ)PwڝqN*JV`NXd%&@HMly>Ǚ؝8($bٜi *qۧ*Ϯ$!wC4.b|4#D?]RftmNn L*j; y+WhR*}x]j'0Z鳩kŅFpW"Ec +vNE3hp{3{9N.&I\U{*&=5 8O^e%a=$L7Q`G1)2$qc {x)4+ꅪz(:})_>_|-sr# _y*8-G|0&>tuaP*\rZp礦0ɧ}qPU'EkH-$"e,[p?zW<ڋr'؍zUfSDTrDuEEwm_Czg|? 6A4 ]ko+K>I߿]I^,(-j~f8!3ɩ\҈ MnĠ߅<[!uHHvfG)m5( 'cE+R6 D7C_| 7[xY"< ml=#`bn m\!&m6[4Ѫ9#~[ 8][]i1@s"ۉ?\(nRj)lzl?ޡ ~*^܈)B.'Bډg1Rui:Ԥ>E^7Y|TΣ7pނt7Du?Q&(ս!Qקּye1n"9~ &5wo )ҫujOÇ*(Q5pYRD Q# rZ F~b|*ǟV&YYOѰ7ln8r pb6\O$KԩS/QEmi,RN)le#9ՑH L' E1Gƾ36%}Ƶ< {źWN:a'u7輔x|:8v~o?݋W|Ïs~ӾW ?o:pzvMymo;_/[4z?L>yLY."}~g(nfs* .o^baȾuo:;O؍LuLvik^ˇli%`|kX9c2|d!HZ Ʒ/%Ǹbלd}8قhKY:F"AG!+vQŗ(dV#Sp9EZUѕX"K9%- WsHJbMyx?=E-1s/A V`0ՏWp_+~>KQ#Y _ REB^|p;DT*dyEtƛ&[5 =޸B\D+Ak4Kcۘa)VbPi]E%ž; q}@>Qf:ƢJ#+H{E Zg382Fwa^fG ][A؅adFeq zQ ^D!1礆jEE ɚZ]k8~i[1t+֤ ֚BƠ׺vDODͩ.IЄԲH_PʥB03˱Vʵ~Ѐmpގax BAq_(&2hiKҭ[Pۻ\?޲los/t(C\kfJᶊ5iǜ0fN,6m,q{$&*lhwR* d,|8+eKBX' RwAtNlwN8"9ߚ]7ǝgpV={wqu/Nd5z#h2KG8[ݓeDlƢ!_ĬhYyvCRq&?& Ff'eoqcj~^FkIŸ>j:GWy`׃Q;'ذ]) 2Pk"^cB ; QN8o$"ErM%!bP1i+G\XF;m,+[P!-<$ŽdLoy)-Ėrdãfـl9Ө!_*43^^U)! ϡbq !STU,%>mݹ [XϭӴ ^[Ȝ3޾,$>˯I?vS㦗 G.VӆI8!>D"0~~[%BXDPĴ6>>v6Nr΍#c%<)W蠯Ϗkg2m:ɓ=CFW>+CͧNm 1>BD!iq]8\2=G:H>#㑊2cA-cR0aQ4ҊXYӈsb"WȵP̂elCdli,`:H8cmǩp) G9Pd7StZo*jO t$is_=N(l \ig(< +CنJi?XXZL*,˜MGTbm%'Lc68"-MQlxM$`HhpD%Ln LjS1p$GDL>>FF i#X'&`8~嫕Qb @^( Z iHahY$CIAPjL˘Lύv/7NQx' \В`0A{u$C*&0'tTs\jDA\6`car %kZ ,v!H!@ T; I0sRNhQ3P -(*']IkvB@?CE͟AjcBGUZ #I#)8PfƑqJ"%I"q셋V3b4X|f)ukiDˈ7wDžQYvn^fTBX*P@$ *qJǡc@%&2FDHRȪŚ+Z41+>A!,%;naܕݍ(XиqGrn Q #DFU)@Y!F"a *$\\Q0A`*.fH˖g'u4$S+-u*_7RKEUiJD NB`à $F }8X,PWh6د.O}$uܜ/WMԍ-:tX?U{і`_.SC& Q+ۼx K4B hVk1\!7)eJ<Ei2fS@2 e/HpA*.Ai֐_Z!R( GsߎH1PIX)T)_aP&0-{YQKE9B+(3`(lf39\/ iлXuK2MۦMvὓŐUZ)k609?bB6b* &ZT!u\ɲO7d)*rTr$k͛G“͏T&-s+9Yb\*ww<o[LPfh<˳<šlm |6Kjht> ԻzZk%.f9kEA($WF>%sEs.O@,`o,V2F!y hqMDH+1r4d]r.`^q^KN4 * *(fPGQR`qlUԛבzW=XzcH|tM}rTiL^VS4&Bz8WN'6 '"݈"92vcƱa%(P#!<|]}RO `rRW>^^jq1Ih^82//gZ8EWٿ>שּ%$⨦(ee:ې^] 'h}offJܕ;Jp~ܓ\mBEA L,7DN2]s>v^{cNK҃e IPv i< afR6W%b8A En $7y(g[XXVJMYZu#)9*3MXAg( re{{mygrm %#a)%T#_dub!eN)Bڠ>T lEYGGE'Y)_bs[upX{j(߾9}l. 
v5kݻ~uHSIuoL#$utI{ts{t+^tq@+ޚYKq:.C.lx>AW" uQx[thA1i* {B~YƳW{B@= KuSU*Đlϙ-?fx;Y.Fάc45hy˟Ѵtv3*#.49H(#nh8mzٙJA(bA?Z.䎘c,m4%4:zPJghz)'Nؓ[|v:1;K-?s|'k~@q`0&B*F,!Ad̅t`]h78#DEqW7)m;KI%+Z 5ѫAϝѿܹήF=-4wfl_Q嗗nLH]{Z"QJ.瑃Be-LcZU]6!z[44\F"ά6i/+yoPxF!"DaZ`ВQ (oq cA<Sa&-y+^N F=29*0 h m˷߫G(2cJ4ThED2+tE7cf͟=\u{UiC0@XI b)ױ41L ,T4ieu{V EUf?xb x4 4B0idX0 ghH`d k]#"v@C%Ny6o3M'SF8'7I>]{.mduWfgէNAy8B]|k]ܞ[+v]&Dϊ醼C(a5jך% \B%9^Z z/ 8-.W ދYv1Zg3rY[2ֳBW AWJ 䳋$!j;VbT'^ |:b RDž}!Ęs[ѫp}!5-~/ S藯mNWS(ҹ3nio-+:hw0chVu ]UQqĥMƜ>D](V$AnXh$kU1p;ۧ%B><DرJAL*nA I㐨(f+ )Q$*lZxnXkWrj>ԽLbee2bU@PIxhБtY𨑊#aY5.$G6ʵR9@TU6nj3ZI=VWƱBoW ihw*^+{&NAHl  0$+\шPB ,<(ՐĹͪT9){և@?XG_,"]j+.$'(VwɾHAD`3hTbbV|_1"BCEcl{30Q1nBU,Ē3wʄ"0q{] >xWl$HDJgPDpd1wf!42P0?RBugq=}|_˙PڥvO>H;[ _CwMOk^dhN[UQUBt\] ti|rtBtq\+]~R;jPfjիDυ-GԁIM@_(*E&E 5PV):PU6)ʛ5PZָ$ձ \ $3F9ؒ"- V8V82aHKnU0!g Folh䢷W9wQp[ep[yw=. 7>wu y@e7 !s3  |.q̇s7! ! i.nvAT@0*eA텣sl,q4° YmRV/_kҳ1D%!×ڈa(UPE0<. 30dBsGs!u[-E(CE:[EKj|2\c/^]F\ʖnǩM^j#ף%Waֵޓ޻ i_a ht D{4,9Il̜Q| izԡOL?ŀIc!%(`jCYj #L##.B%#Xm ڂ>9!k';|A(ݝq=*VN6P#'I0";QT%*0P??̭wtDbԐ?,m:0&;rA[?\ir: &?|}~{I x>Lǯ`uqnWmt~㕒 t*&d4xUZWo3 noǯmOM6KNo_ /o/n\62KW[]M=WW 69env^e`\㴐 1807} O(4گĤWt6.rWoRc 'U>~RGl66=]YKG:v.A2=nӿ?:ܺ%ޟ߾} ~x7fH}KMݐ3bDRg~z\_7n&:%yly8;i:%N?LF;:KM.ğ >hpˈ6 G6Wx>|:[Nf_MS<}1779g .>3W&0v4s.?x?a`z^(@g//4YU^_NA7\~m2~=du.wGxgЌL)\_šiN4/ Ts\^]=M.'Iڷ73hsݝ~~98?#=>e~44-leT ]ji do:Jt߻'̤̖qw˥Of<C/@/8OJ'k=:Ѓ43˻/Z ;_PHO;MiW(sܛ~^]/:J[ȢucNw_G{&v{ PDN撫B_ @}%ܲ_ᅰ甭bK^n,1wSreq׎@NCh*!OM{P'nPg:W~4L0=*$fTGvP[tu?OlR(N`͌}؎&_/0;6MDޙ< q+/mLlx9 ̏4?%ɝ.j3EYEeHExrtn6ʓQu`sN.m-2) PH[.nnY-_6[Qĺֺ;!Aq:Tzj.r$! PqDTAHÔZ(T1%#3@U um=CeUŽ24.3'4Xn灈1DV"ҲXXDQsI2UĎ|CvA."=[T ݽ"5RʩYH- ZTF%&k R(&5.6 ﱕFADd? .5Zu]њC}Ç-eZݲU}Ml w:L- VQX1"tLcH-9ؼ*B;h :UL]!p=pQ!8" ھ5P(*3<';P\DUxǓ0فZpsa9{FS+kmEb-< M!9ZYJrHF7h i]F3CϓEnu.Ѕ`$"MKyo50`6$zdEgpvR kՄGpn`~d`-c}<.+]7㻡K묿v>" ><3ps;v#i0fY/pڲvjT]M#Zih6-d~烛\[*ZN >nӻ\|@VSE-\\ޭzw*bHcl̘wJ$P.7"YIlXv)*> 2-JnyТl5)3&X-`a#fbg̀q@( ݲfsL(`ݥ6ۂեrk9 8HijT wY.M)*hWZ%ʔww<|ݓPc52nI)hHwi!ӤC/ E\!\*?vEץvV#MHKwj%ϝiV4 -mARxѣ(¸Iō(G2𶃞= #zxwS5P%Zgm~u,n7*BJԞړMzM+Ԟ,ٔZTdsU=Y!mbxKPEr!r8rSfoF6IX}Q J֐b~%ɮrZdϳ q,L Ba.|%\!X~:sϘ '{7/;_n?s!t3^sf);&lgi,qNV+{Ud{Xc=<39ʹ=!n G*3L\vt݃I'NO`$eNƝp ڣ(PB +]/&rݰFצk׍:*rlXzzo'JrWfkYàd-ʇŗ2IJ@M*F8cgmVV&EؗQXbֲetzI̝ %ˇW1Y (-zsVn5?(z0:1?b#-Jx,)r!GHDJ(EZhkdʴcmtrEB1oI*#(ƍ‚:aaĖSlW8`UP# Reœ` I!L`q 669%+$bTEm 0.R%3|_Nߢa8O5,Jwޗ[1&2m҄PcԆ':Ֆ$+C\#FR Jm)8eKoOv`26%Q Qpn<e'S9dA( 75{츪P$W(%YhnɚBfՔ6У@ElJ"UF ҧ~G4gM~اdMV̭<(&CU3$1mI7g;*JXj3{o]F#fHDdʣ"BdE\#sԖ:ćD_UDY$r<|ţDLЬ &O'МyA8j5qꟋ1k6pyڄN˂{?I.#V\=ե`uZ^-Nc@w}o <;S!1R@w'"rݙӠ0 cFcbOrqpP>G)̚gNr|0U{4GZ)qWo"c&PZ//Tgtx5g}3\Lz[Dujvww ݷЂhS(d[R{܍jZPKE+1}bsatV@S)wf[^ȣ7 ZыxW+R)CDZSTbkCE& %,cWJB 1 9EъT'p)RaTHLPvw۸vaB-r?ό61SG1!G|nQ=Bpb:EANJܭ~)$]JĚy5)QDbE .TzNtJLkńt) ӨPVTyY.A)LeuG5n8D*ۻ0Vڤ;N~Kf{ *PDv%S-*&:mB)RHsk I10I09G;LKc6 `WTwhBBpƒrhMDB,ϒDqĥMbڴT[>.c")w RIUJ8JSA!Q[GE 'HKUJYSl=T@E):hT6oJ5NVFسjPHɽ3ADžejiXRuƸmc R<Ϭ@({6`t7%G yʙyG0bQ8 ChyB"xH%-g*n]hޥ優9H{? TIDSˉc'Ԏ'9f 9n0zmk+s(arG[]Tjql?J tkWJΞK1baVPTC)ewp>2(@(RkrQbx| @o_D"RTDfQ*N**hצu$[BmՂ|W[fTAQpo;Z8Vi|dZ;z"8nVy+b,R9UB'4Hum$C:6%tu)U7kPD^d&Jz3Ĥz?Pj=S]3v Ɂ5$+9%i9D-lbMN+: \He٣,dfj2uW@U*O]^D:-4 tVTKp: p L49@G+3}P)e?򪑆 yJ@ƑY{TG$jHhgWgHgYsAŇC3ؾXhrDX"-WРƏ4D%F9R(CNkfÀQ2D+K8[-F8*lR"N6M#5(qKljSn-$!*!@ ! 
p$\ZHK-zY-U>IEB{HmPd/Y,GJQNHé3&+p9 MW~b/nXw.A~Zeg6𫣕shb"9gH4 =!DzŔDEzeI@ą,W~oP1(\]Nn8lM~ !E㳰-5E ]g]Uold۲ݱPq1qso쵞d.-.^k`OFI@K?O s qFYDY"ƍIrP;;7'NO|w.("+mHOߍ'Z7ep1L.LBBo1;/k6}|xr}koN.`4L>L~ݛw}旿wIx2,K~FOg>:_OBo'،_.ms99x`lݏGYPnBA>>ome\y.yoFΝg 73sx/y~&[!ہÇ.?z 9s7>KV۸N=~Pia(KR=X毙[hI<=aA0P!#O3u_꿚.k[ws1NwO l݀]i v>XczH*r0 ކAC3w=܇pۿYŋy12?ؿG:_2?]FW}ھ`k#adwӖ3ƒfoUSQSg8.&~-\̷g_o'o8%P?|*NF0)g&_uϳWzǿ^4̴p=?8PF^_eB~?ӿ&&46O./'_x93BU0ٳ~NAL٧Gc;:S/ C\]=~Lui_ݻ[ )]_} Z¿-Je{ϿBQF}BMeY&t,U/R?r݂/!AT;8ƅhO]]M /ϧZ$r'ٯ+ӻ~&!wbYc.Kxȭ=&Aj)NcR2rJpcPIHR$R8d\xmZtev |] /eE]W3?uNm{J5Q4)=.Sʠʈ:Tְ s *aaPSk3֥ǚbiceYdQc?,3(ˊz{o)=% U0opA9)˓Æ Ϭ3^11\FkÊU~:۪X [:)-Ug9J Ns{-)@(_mz ;gSXN5Gj]YNb+em6qb0a)NBkזΗvg˚2"< +~tZkdPS}=W,-y.֞~ǚܽ<PǸu'[uEܺ nIX7Q7ΦہwUC . UCU[YyJNܪo]֮l^HhX#3ߞYp |Z=´K[ MPPx閏Fz, ~FN˃|ND8*Hzѥi$R+U)z1Q-tgө x8QqAP&BK'Z:mm Xj\:Ifo}\%BTN؟j薚 ՟/@Lfz!Ka`TY`'&BsMtB6E`+INm&*UŤ͍u D“⤯"uГ}1!ղTB0KW׈k&xwޭ:/kU`ƬHVzRcA[^R .L=dq)40*v$`UB#-[GS&X7!+&hmq:u–1:e;ˬ+ݤ)49͍G5پ}] ܋=B5~m0{t"|lf+iZh{dbwVMkI hw=ش.ϋGPWu~+C@WK~#CUaDÏlSLHޑG++G,%ƭ?VM"w]qͧvej$5<#Kj-o%j%j%j%j|KsʹIe:Pp "J4r3ְT"xT8R* 6NҼ@Z|oJL3[R,Vkgb\`*nyX'li2=`)"JT45fmT:dU5CR K j,UqbUT8 ˀh@-ۛl2. 17QGezz:3Pwo8/YV߿zxF$衖y#`&8@Ry2x%K[Q6 ;1BƗ炤.N҄DZH )(ʘAv}iޠ]9*-#)~y{pd2J VI[:FF)H$N-Kq kMj żT2S*[fM&"[ x6&&Xi:P)fŷy]hbՓ]%q{b̪&Dk$=Y2Y vV4. x AI35$fݫs" !]V Ոt`WbJr{5u2IRy y]bܡr݁bS(s l\oq(U,Jtqh-+QX-"F=~fs,~f)I*jVa:½ .6ɴj .@L 5T Ny jE*X|֊ W ,L,$ɀqr]m)QyQ7Lsxj8FTa3*eIcQR9P8R$]#V܂wvwade?Cl8gBʊJImǨ{JG:?);_xL@1*[Ĩ[Cstt`Vsٟh')^TGs_Ԥ"m_ V1uum BrQf8bԛC"Ec9˜67 xͪ@82.ؾuX06.P<3[8} |Vlu"@v$/ ; ^;tVކ^ކ^ކ^ކyy˧K\`&(S4 ZbX8*vI&H gTLW`85 9!eQ[aQrQnLa,XDNY'ָZEuV:45pSmƍ370H~6Y? _Ϟ4TFW6\_ }ul O[>:ꞽ^|2}ާ$ }K4ӻg0+!!w`R`\_^|ڇ_qC"?e<<zdO1\ Dg%ww>?Z@4bz[7=QR W&#N9ֻ\Rk0KulcML<9ʅf4` W)$TD0cbuP-ITm&Eh:_CǛߟva kIF'7N_G .aU7A?i6KV!< Z(1KA7@- 8$q0Ut3ϤdxpO.S Xq;]#eײ_N9~l߾"O+dZ*[-}Z] % Jɺ7 i*Z.Vyz.\(TcM)G|tSN,?SݑXiN8rW<ۈFILpdbcҹE{NB]* t(K}aQN+/|-_e .ۯ>}dWFAeQ  N偨jqOj|>L b ͖1=P|tx|L JUTɢF*!YHRN`bĩ Rb`6Rl4RV #=]X@zܕȱ.a>Ֆ]6K :ʳ`gڪh MqWH:o0craVCzGx䁸Z z;\Pb~O*b9plIxW@|nUXkZc(LNI-|?cY);}g_";\&6JGY"^'MQ'Q* Fׅ0gɚwW΋y 9rTB23#QfwH)Ţ%jUT5u$\^b ]ٯqXEF,bHDFZķI)ة-cܲ):? 
759*(ޝ܀ 7_b䋫>WaЃwOnfvQ?jYg fV'kfGVunUijrC:1 S[BIfQ:f_ IGVw.75BT7;m!1|04TQjب;G2p% hP?)j֔Q^S@!O7"\QH&Jlj$bQ"bq̬q8,=9ۥvY!FO([wA8n1>HN2[?'-| Fw ͷ;װ}5>sEmnr{'O>p?-ct : yߥ G|S/9'0NmSU9ghY#e~J"jZysj;D\zv"@n`Ldȏ97)9o5EHpHea;K6wBC-s'DV[E`.n=+ꨬC˅Pq3DҏCBъg,ڒg} `F Hxn>܍{v.P/ҿAard(qK?x?0~3M"kaKe5YDcgE)r* k7UHqiڄ:pg>o@菋v44O{e轝Y{(^"۞FyٛtA{#[zQ"r] "~r>fPH/̿ŗwZח>όo%R&`%e(eR\7{/ա ,Зd(,K;57Gشebh(4"R j*dgoOyV'eyV d'4Z^yh:qlf2'=wsJ ݦ֟-QK /A"ꔢnOQ$FzRs8lI$G 9K9 )ϕJ)XGnl0"G4=#=k1EIT >jO9(IJ,O6r qaBEEl#F X 4LhRT[X@ZI9ґ*Q4jܳiqW pUp*[mbRc%(E6a9\ÕLcϼJ 0KYJae@N`(e{]How 鱋<۰x8o鎛/kAIן#nVwsL&N|ѧnDD^u>zۅd:[r|8,/ &ʿ34ُ]U;%X<, 8[a4Ge{ [>+"f7]XШqe mIzFC+.w f]'Hdb)&I*(?ƨA;G|THXeS{8TJCU;ߑ]D_dv1n ]= -#1}ؿ#t =a1y˻W%(# )U3e¶Mm ' zza@ 2;LV@߁n1s<%΂)J=AoJYX KK}[ 6lJJn*TZ*E0lP1]STbAw.;(W?q8h ,*I` s+n)fRKۧx}S[ RLqHJ> ,,)DFo TUFBx>J2l}J F31)b sdWٿWs}mƉGKcl*҅%` F|oIEHLLĤZԢע2}Z胧AmWwf9Sqsq/\Uuee<rrKy|~ȝZ j[ݧ EHIf/C8TPs]X,1z\w̛qש- QT$B5&'2GIā>Z6 9(Ws#Jjn# *#+9DÕi8^*̠l~fr3sT\t3?Ytw3 4Xwϣnϛ`k@Yjsx'6ġ:V,+~u;רpӜWYyYh,IF 7n mnq}IjuYڻz{2D6|lYilz(vz&q"M jDV¼$L5|ix *F/V\1/\qˀJ~ j韃t֛/on_2*=:uIoԌn 2QYj9j!&ewt+j-icvOc+ }1ʺ/G C!=,|a-2c?[zIZqjV\썭 y0>;iVsi{|jĬ~:ή |4?gsf'>X7JJv{M+MQΏt eS aYt'E',^㜠uñ# 2J)QMs(y9&Z/$;rfL*0rWD)x%ŵG[5JDˁ#QM9 RՓNk%?G>(:Mk,όOyupU3oǼ鉙7=󈔜KGaATJd@QjIo5{ 4YI29CpƤ%#FFgi!k{I8>M-(kN^D " UKzDJ@ɔr8FX7I=Ɩ E4x*˭r)ɼN4E\s1Wq C':SLYK°P kK jr9;Gw| 9ILaBBI(\*L#֒׿xo8caۋEbt,zywk鼊z86 Xt*D*dPFie)!"8ġZJMǷJMa *5&j.5Q[ Wn4Ulx;&='O7-dfݫfr5$Wˮ޽Z}9"̗ݐtY='ul~‚,F1UHCAQ \) _|id"C8N1IRQ%!∀OVb2{nRr=]ytGnF>/10}'*ߦPkDs&"ps6ޝcuot}=_a0oz|%斬%wvV^Ty8TM(>zP~Z:WKJEǕxsABσ|Jȥl8j"ܬǓoynW 3IC;t$Xy$pqF8k,CUۤColIy)4.}I'8 ކU罍GӬC@U4n~w|yyb81// [&!l(ABOL+=0o81}}"cd1$s1x7\ [K|(З'݀q(H.^bUȅxQsS'^R1UAn})v/İ}~*ۿB}j"5Cp¯ b x+kv<OZs6x7{ci)eÑCN1_m]^8[F<7ii1?ȁ+i%RoaNPaQ8Uq^n ZlX7'IJ]IERXNsb.%|K5^ 7-p עg& ؐV .Flj!EF鎛/hcnyzg Bߑw7ϻYoѧC3~_N:LgK02 )e_d_wf:qy3,P0M' iE5fΟffGQMR`Jx>Pp|AM$cSvStO& O60cw;=%8?&c0Yl`9Hx?Bh`v>|eK'aʆ&fg|Vtrכ-|A[𓣜O 啟~2 \ٺkƦ#ri?nH'J;<8gщwLݣgOCcz x21reXF(B ?ֵǢ#}tG>wYhcyH1ǜt=ÄŽH֊ށ>Vb sNv}GGJU)F [X) +w"4sIZtpJ[EA)f?-v/!+.yKWIPSAWU5^9PQkW_B]eNx]#A s1(w8i3[m/.~6xzX͆6rs#V _(˂9K%:B$ם! ^UAt Dqec+-ȨB[ j^ɫxԑ_1_,SDϋGji5*-!AjuUO9p8;] F1Eٷ4Bs hCF,|;jc6*֟-G]oP'PV/ ?_A&IM=ue.Xw[d0qpGbt+DO pׇH5zqcX=q$hn3ߠ Y]jGo:Yh!c6*YG1nѢ QVO:v(oxbR"b\p@-L'uj}a VЖDRUowi.5gD;D9DQmcIR')QU~&֩rMmE)`0pC*v߷(AK/@===J*,WxV E:p\ [%UIRWjn,UxPڔZHZ_1Ni^D!Ο_&cpy:ޜRaR\H\s+;aX,}@UKɰY˒g~>: \#Gzz:-+,SZ'n 461t{MG&Grmz}Ims&Raڏo8mri8xY컧uʼnK9BH+-.ܝH˨h=DPwJ8iFL 'g"QZl7ٻ$1h)KӘj?801dWhc+ї5r>b%A'rmZh&BA۲TCG ԩ$S")NRc2ţL>,ߤR9 xsi10 *#!Ih7c ۜЉum0oK@d6mG'J }!$J B^F rua_†b3]cI> po'y}̰k&׹0~`0>rQf̡-g1=*=h.$*V/:a<乻l[ C)d58c~UN_'؊j>Sn!d.o3 ujٷ+*v:b6<\.N/g/.V >LV^^\w<}7~͗ kL…O^]9|ɕ,k2wqj5 W0N*zq|V.USBokӏco wVwZ8v>1o%z2Շ'gnp=J 68cKSg}'rPa}i_\l. ' hCgb +8kNK'ُvZ~ͯ&śnY ˻FӢl:/ dY"_m\1P_U?}zCg;h_q>y˶z=~ᇛ Ǣlջ뗋_780Z^l9-B]x9Zši7uOMq;g ~4kQ mgW7p}ysr~5W}>42[}]{p>AЋk4V{3f0&<{={++.-|S(Zb#+zXᗫ IjM1)?CρQY #6P`qrKTU p |: <-`bNxy1r">V^u4b@1+_VCnHoTUxiip 8-9d@E,,""7%Qxp!d#T=tg|*?>nGou%Ưxn|#0AzƣEʻ{W`-ы4(M瘻H^ *HP^"}*]zڢkϚ[D#y42CTt8O}Ν^gԝr!=K49Q3=&BblsMHF<3nXVJ*u,'Ԗ y j3YjgbV1=xh ' MRI4$v#X(dj\Ap!ZX2 O@.dP3qSMVa-(wcJТJͶl0Z1D53&ҹ<<,93ro{kuȊX׶.#KŴn:h"2g#my<qFx)?I)jh'_(@ǂ{$&*ECrE.OE٬@Xʎbg^4Ra~jPeḟab&!e{m{+O"iN)M(_aaIAanN΍u'IMXLkE^\ƌ`-4񈛌%mmLo?r(_\xoCQ⹳3Aņ CM΄FJr}δ:6`e~,Z֚]Y-TegVnt\r7խC}PPf!B+K$gLy/ׂ,q E2ցy) >WJ}QRԗo4Zz0 3F&"#+y>B OC~֌?$˜n\{c%XM/1h-8u#Y\3x?@IU?Yuݫ8뉺96J2z>Ca>#J:!-00>0b{p98'&"'0x`FImk4iE|8A0EѲRec`∦ħNj2 SxIҌXF5:'!OGDRIBV=+yJиT0I}AU(5JS?SwψՖ*Jkr~CQ1OxA(&c4&(vk8Gu8L4 vďH (ɹW2%*z97ӑy[+"E` k*bxf%BGffڜq x$6Ur MY1w6q^\yD,x:J+Tu&F{bDV)v4.Bہ*,2"qysS\JLgQ'as*eF1oU1r',Ro S2OSI#zĄ̥0\6L kf(MaZZG0)?GLV')Y u~gvn P;R^p/. 
Bds-dJcY1˷MA*'Df(fL&rj#-R<ˌFY&%*'rYYS;5*(Ǫ%֤i7rs$(G],D6x S qb>(ws,0ܿ5I;n̊UzWw; O-FJtt`+K #vih~ ? K-)HՎs`.I`jaEtzPATsJApAVYغZyk2^[`Ez,0!uޛüW3Z?vCDZW; l{@Fܚ{EuUWnTxmQ<4mNmѻ^}m1{,$@Ű>(JtrLa-"J))V;lnꅳSI΍RL&&I]9D>TF#/-nrR1SNZ`-uJǶbLѶզLVvZpl7-/l%PmQR%rJIJҔTRULQ#OUTۤ$5Ѓ6DmruJaV?ô [ v2pYwL:RI_jQW\ÿ`5MŘn]zK"%U]KYy_RTm-6LiZ{KM( ~զ\T*Lgo 3 ,P0IHVXEP/Ɩ3FsN^k>-,|w"eQͻd .OǛDta+.<4hϠKd.`Щ_f-ٛ웞F^2\K"l=` I.ZZ3>R/ "QQpT*bEHMh6y4}ZMl6rܦH]ĬSd␋3-5m"/r y&bS5sM0T;wK tRĻl tʈͻGGz6,䉛hMmy7"* WnĘN;xvNu-|3һa!ODPwUdfjͺ+ԏa ?sT#z|RAwUbɟUOBNl"dW'{N ʆE&/SHS'RlqRh'kg|0yBHcWL"v;!&hA)rރXjiVX놜BZ"RH#ET/ P1SRP;zș+LW+1Pi$ G%hVIE"֛UMUо:γݜ7\x&Wߘ/N/ޛt]^}"bM ֑3$Q rThNmzSC@8iqcNZ\4\.mȒ)9qԃ%2IYN1IٙٙٙTP" ("4XR!P@eaƱA&Dw:bP[-cxœ[NAG`ID!8P(Sc* A!5 .ZH!21-6E!9f^VNQͦ Y7u';qn " "NW[懿^ 'Z$dŇ羫q: a'd^/UW+N'5I6Joک.ș'0Wve]q #A(".BlҒk-cPfz 4!֩QCۡQIl] ߵ.F$ l-b%oiD[ytn 1ċ8ə)l~|[}|oҫrY|M2%7gy 2ם7M(gw z}Q7cd ҅Wj¤gn-g/#k"}:|mLϗ޵~[蛵Ykk_(_BGAOt} q~At0>]^&+^oe6QU/_? ʿm:햃_쏤 dEH섭xdחj};<YnhA|frM=ƚeμdYkrR N$^^nWnpWJ,kY[}~"$9.6|gdv̋&&T9n#@;& \9j?F+[z7{a+ `9|<O^ǃ^pcM]c,a0^wn?n5fFBzLf?lzD%y !=/mPI^Bm(liv%m .D}<87guS/Φ`i8P2,OHPD#b̳d75IǽQewOñ`wrv8vmy @WpN:/쮤ey=ȴIzUvl×K;oZ\ᚊ]ƵVwפ$I,hTxϒ;X1&:K~cyQݷ#Ah'ȓvdvImE%rmIM\_,FJG7ӗςMc$+ S V5L >HY ka P]J̽%zՇ;[åanvڅ6YVl]O擮UOf-j-!ڝ[vw`6 І{x̓ C۫lni(]4q5WE&K7.#5}֊sD \Nx p?}"{6zVlͳS޳Vk~ t[ f#| ڠoȏj;NWRZˉ*XWBɩ=h'8yRO~!/vK󻷮eW}1SM#z'DݢV މmVJz1EB=Q_(N N1P^fR^;zPOu? EO~;) tlz—_oe(+Fp\0@0 T3+.Y Ͳ Ӄ +(/`z~e$ĨÙT%}=S.uhÄ´pd$HsK4Yb}_J ` $b7E`JVPfN?^I ^v\\O!4'dg*E39Y5b&>^98p =5 | gϳw95ˡݯ4 ]IC^Ob8{k]p 2צӛa!ym}U7ƙVwn$}(tLBc! Hd cF+F„8Uoƽ*3~A GG8ȱ2AaY5HH>)oFNkDc֌\#6E#}%%ܭDnƚu|IHLj@Wk`CHkAzcZ ֘D2$L)}ˡyeQ/ᓷ@M ! n{^Ey., i_u["8)N@cR,dti |$Q.ǁ5"X@j#X3yXw7bY}Y 狟86ۗavW7efgB!3ؖ=v v_ʷy2|SM)R9ЃB Rj/qݵw-~,n4N5"_ >M|d Bڞ\c .=HL>wnmHSE$f0~Dzs(tBy%mמXו!* |T&=aE磳,X8i#ZT2qW0"q "F@)fw/}}s6LW8tm&krw+6({ ;>y@0r* 7dRz\WC{@,0(tm[޿:hI&w}ŋ=T6z3iZ=~\=ecO^ۖwŗG9.)WT˱ Ib鮦eo_76|c (E;$Fo8"1o8J- El:eI ̉zw85$>]i`{&MǃѤpA唭s,sg;KhS@v=NeTI*c)TNNyq"}i>2թ]ssgל;hZ죦̆Bm P˓8ȭIdp>PL%Ues@c iU >0qI0X AhUyD.5GJe r+i‹; `V =h]kyQ! S!)b~:r;+ !}޸RÎUV:A=XǁS!5O/3JO}C$$C2Q׫QRZ3'#>ģ Pfoo!#4ߤNsX3{ zǚi1}&U*m+{SӦ!&t 42J{=l4GW㝒mH: LΎQkb%uZS$/Q#kmVr{oDIj`2B*!¤Hevĥ rD([L 5~kq/9eFÈ* 2@ fTbM!+ֆPI Vg0;9dqDmopWF5v?tZ4'ѹYu^˫Үևk4LӤԻ/sI+Ln$7Dj8Ldr4F".KkmnVE}X6C&vNx줝wAodI#RN} 1!SB (Skt`=\4qa (U,֗&5Nv cW;| ]0\Y#Fd0D;?(zf PB c+\tq? .=dLpS.(~m:EeLVzMrjx'8A//qN߼;__Gp@/WwZ0N7=tTfOdhW'iNzϛ5< ~?<ǽ'gغ*J߽$f{]7nc3$F\ms\H-< > 7da(|'Z#{u6rV˽M-X]pF8t$WYROrr΍>qnu鿿޸u[oMz?M?4ɷY3w:n&|OᙹA~pWƎhJj2Μ/zwzj*o3n9tIuxb08x9,aq~h.W q:[z<^pÐBknLߦ׹_Ad\(w n3gQ4i+oQOۗH__C8DgAօ޼>4YSN''舋?0}ֽ܍PS|'*e?6xo6Au'8XIuzW4$`GP.^g XtL?;{ }晹O2mep x~Uv}/d=%C(3!pg^\ڐErA.fbCϾ㏐.3V]{;%vWG\A̋"&[ÔX9ikp`Rb d6@N5jz ,R tAnRem}>V޾E{].AТM$1ۈIFGBuuiF,bQL\t>2NuņG8 MesXhoQAJF}0 =x`LSA=qB L} ږ`!qF@ 1m{A^:X6׳R!+( ^cu.cCIn W{.#d*8a@  -:|c18C~5,s_2(R0d~}XS)1qq&0Tq Cne5cGAR__19&R?Db:n@Մ60!Bi VZ  >V:0HH n +pp* *de|,BQBeKܷR|# |6 Av ν4龴|/Ն}KwH?Jy(Bg2Q1AD=p9CJ< 禐Lo:?Ӗv}.2S3+/6L HC΅!>,ǡ9b2/ &>J1RFae0u )k%:jI `,$TPɈ l3p̀!!"JRQ!6K9uNVGdN*޿DM6/P &R" "eYeSG ﷣&9T*0VUGQe~R~@{+bй4!w4hzt̥CTB|5\Z\zs )\Oޙ>bPwcy*U:iz 3`- j8\|[:Li.]JX;9;d4fFV8$IF y)qzG8Č:%ҁf`M2SKW<ð@m-mf9Ao]|q0 oAvjrt |)< Q |yǹZ[TSOI~n6PκBXrUOϤs-X~ɿBHTN\/!fՖ[=8;$܍a12J‚&I3nNM2K/Ԃeb&L^7[H˗Z.-D$ZgM<42v\w)T&{&m\X'lS:gk&P Ǝh^!mxƏ*EA2낽nSC44u#T˼Ǯ 5ϥjI'5@֕I<,P_ٝ]/Da vo{NFtgdZ 3=sy(Me-ڬurt"JݑLD-X-f q1R=c@W-NJ#w/RA +] MHjth\ ެJPVXf8Ėڰ]$ B@oxF!?䄀gG_;H!@0I8{ȚL?3ų౯ b96>vԣIr [xp~. (YF= UzZ_q>QVRPh9f.m\kdOժP:K#u(-ްeڝ޴MG:xeg\(n}. 
1T`M鵢wzԵC%et GcѺ\Y%:ǬpKJk#QXBnJk{)~,@!0!3̭Pbq=/a&³Jqd@9bmN7{ysjv.u=xwQ"?@wջ[:`mwa~Al#y]<6Ce ۭ%GA9[:e&4)er]tS6F~ZQ+s[{oVӣBawұ3㦮Vv7ӭݨB) 5Lbƫ1\—:\"n*bv7.6$XpLʹBzM~d)@ЇDbIkY4]_,PHXB 8B $*xʹz[ mne!Ew]HQ+}!iF5ϊ + rZQ;ݜH{tƻWik*o1aZۑ!Bw b:Ӭ[y3 9c@r ]&CZns Ej+Qr9 f*.X6]opjڂОl.[dhdXL i`3ڭ͚Stk[wq'*vfY{%9 .T飛'\Dq;5_o֢DZBn늕RI#OQc, F@R (tuM Q" mXh`9RP>"B((Zs[cd`nΓ1X|}cCFPEOH/}F8ɤ^ K4uL PN(\#c\u097/1 W:CqM)!f%L )wJ *._RsSvN3̧C?7P'?C]m/QՌ,k!g:rD|.<.q; ҕ1D췻?_k1_B!g->LH0]|GA݅=ƚ-zEoaAƈU}Qfw!H4>4zܿ.OwfՏw_}dc.z٭.o  XN2Θ.Yf@$%lEH/'PRBD~Ս7qU=܏QX=8%MCUs2=r2MQf B67f_;2UM&/9-O>-SOik*w']h#:FEq%d>>р#2㱨&cEO:F0jU/"F5mн|T(H\HKʅ @<38bʉ'W`'5U!cyW  qٓ+}Qh56lUǛfְ(O{, t~ǥ>#go rb#F gn )^cJZa{-L`Yh 9-Ş9ьQG }rA;XBFeVǙҹ̴9^Jӻ Jk"GR[PKAA/h`0(`(C17 -f ]ILSgxx`7aS`uT0ߧ <; 1qF{酵 {.WOchB-B9Lj$9B`.1VFstB:J1qZ*Q_ݲ4F-fG2Ƴ;}r\2_4A/<,F -7ѱӂ%%> r&KAʤz;y h3&GV4n1c\k Ȏ{ʕ`5cq},~VaՍ> = n!:•j<-#A֤3KZ!ZuQ|]0a!,dA3I4?]RH.^>*B1&=Z"F[%)*:IM H|.X&%L MD+8Ieb=9S4~QuJZ/KScxV}uy< 3<&͗@1*4pȎx͒F%kZOr#ޝF͂)%{$Ru^Olg| Ҡm'ΣW4heLz D(0|wF[W5(*jH{ne=qWh<07BapSi?3my+҉c8i~ѭ(VTYo`_hPfhܒik3˾\ŜGQamFo.~)̏~p,Q<<{X?.ZFcfWrz5^ލb &<{:pQ ^5vv3xv&Wr2}B(v3Q᫬aoF5@[bOkr{cU+G O5(u;yS*)Oan9޶)j,L4`vƉ[V#ehЇ) #-zyukʭdKiñBN, oGVA.*U(s&6B1 iBr *)=km(J- Z:&GJS!8Hb+xū8ueE+>jA n~oˬw.47gfŸQgEyL &YeywX@fA;xQ_z4?L|X/Ao? Ag(F'&Bޠ_I1Rl]⬤BO}o`'J:n~NoMܸjzŸn\u!z Asa f Dᦄ6mg.PTIkX%o '\g+}p @ZW:KYYP+w܇jJB0;z^|N;uVI.z}{=.kGv!څz70StxfnPFzP~Ic"sBJ$}!CN5s>{q>t:֤ZZ-x{I두hH%ރoo}/PC"޾u|흀3 )"y7;"&yLC&Nk9-]bRٟ#)fNQzj m7nۦM@`<!VL8o&ᡫuk]HÈ[K.Gi`N R&G'+KrZ+in2) ^ Ki-p\J œ:qG{ѯGݶFQ3(<Bn`XZJ7LZ#9'`]ϘT:;>jI7bXSr3(UXyA}݌ 0H{aRn0ZKse?-"nKE,~ro7_GTꇟz8-/g>)g't6]6TB5}vA<{c xS8-yY#Zhڐ} (lB ZQAd #҃rvr3Q͹ho&g&DMW!JC%ϴMQi>D yn]]/QWѪṇο}㷅մ ˣ0یrd8*l8E;+2z+I\䛍P(DUu˞fc gތt.Oçh+_.Kx{̆ܙ\d*>cXL!e#驤.A:N:Ŏ9/~x ~j*'G4Zko"p?=L%A??1J7sc}[$$>|iYU@}+4*`E&M"KD6"ȩK` )rq^ bpKL=mGLY,m] FQa+[iD#oӲ\tkJ0u;?y2!-WNsyjwJ vך_,WPzOJERۣ7 1巫3X{<+cPn{LKsn'v=4Gz=6֒KR>Gx5p)^M\et̹xDʽ*`DǠ!.ۢj$ďv1h^yDNr<0`HWwض$xcۆDU-ɘ%յM3Oj$=G]:HV{5N?kڕONh]-J*UWSӮ?ҵ]ӮXtâV 5]'5ꐐ"E#Q:{wbH rjOh C=Cw o^nv2 N1ٵ4Φ.M|=jt۱k=uC'PA=a HAy?tX?B~!~bN+BL*mf+z/;D pEV`֛i*M֬ėcUpG_4$3M'˅z^ooċwv`:w. #v;F6p 14Ml^A1yeZ&QV%o*YgÍx|"Cv fN^ :x+1GJD243p k-iC<~KSS)wj@+0=HshXNt~i.MRˊwovZ+VZ݈;O!Vq2Ն$7Gi=`1q1e!Je!J-UלK$T[أjM ^&CH<-wmHwJ2V$TEq-1W*sOURID8]Ǔ 4 > Yȳ'\\(kVu.2s`Ҭֳ;,ֆ$Dp[OZlv۟Ȥ 59jȅ ȝa%SOƲXxڛa)7yDX3l!.4d3i$>besBMQԃI|Ȍ._?_-:MlP_#D(Jz+ U}ku%O٤:dw3(bYv9d9&f'wחL_ P(i(9ELH9 4L.~X6XŶ6fwenuPrލY0Ng _ fDO l `Ա Hl&Gt|S`GSf;g"g,}{ JkbmZ=\4u>$ı8N|@ߎ 8wYPĩA 携P,FK!ʂ&R*Ac12$1H }d&ꀴTG#^(Lpk/T5L- E>+"]vt3,Nh(5o5fz6۳-tfnB ݑj{? A/ 1#ģ@$W. Ry zS>$4zRC&@0ǷQ!_t,(ItZ"\('&{#!:Ǎw"X`4RsCJ£d "b`a}),i Eo)S܉q&F{ okN ϘFǽXōFzj iB c4](ۣ Xn}OY?4ry,hpIK @8 ?߯>?<,>MB6#gnnDDpQ F'`Lg%z7z~7B]LIi%y~\ PL 0R 0>|{tu MuPj]qǢZâwpi$EOXo!zJIvg%,4!geh;j0o &.ɨĘ-2zLԶI%bb!J3ޚ (_vfo<_MeƊBYպPH9LGv, 8i9+_ٝH 0݁hЀ̏y? tK—e{3'0}ۉڢ  FMRp,cO gLP1(64D3 k ,r+epm6Dv+a$HD֙.Z.xK">`K( Kmw| w, / JHW|{0z8n[SLj4A`ʆ`̬AmN}ܿMypRvbٕqY.tp~jt G6ٮUѓ+k/ciz ϊp0/tlq{N1Bsw;zD[|(Sx7KvX eeLc[aENN=ŚJ Oׅъ=b80 Xr˪عrs";kE.vIuXTy.&I0_JI.^Vru4l.7:aKHSX7H;lr]{طl|R<=vzw=bN?ǥm\A/Gf 0׌ϣv%lh>gע47 8t\9־%dokpR%K=LYFkWaJK0]On`*>ELJz0))*`띪6*8s sB.gH=4ѽڭ~t_Osz-'ž6} 4TM<`K9 &kT=՚ɵ5]Z.YZ nBLf4iÃ. ;`jYO}%\m#'mܚ6nk꧚1~[ n}N9\+vV`djنZ N;7Z; kg%2t;qсi ۡs^gsee@ޭm W6^Vԋ'ztn Ej?(kĖbs"ETRΘ,)Qar )M"sbHcfy y=b؋(_I7~R` D/>“Шz9}s׼" !tvUHޯngw vJZͼ՛6iM!X4K)0Nkj pNɧUSEcꮭi+WZ X1"2 ]XӚcPT+[ҷ~RP iV|QϣߥI.0 4*lT77dz_?L3{u8GX1Ӊzf! 
:̬I"9q NS KJg18KxKe n`x˒LpӴ;8Sӿ]'qa3+ JZCo юȜ5DK^bF%AY][X9+0?M\A4{.uV<2Fm t#bpg̑w1]~VSjH}^dR6\]xF9> Zbg~gxT/j΋yW8}CQ" (9h~; j|\oI>|Zr.#x?$㉅#~f+K,_nAN/oO |{=i۵TOlLhfy89NhDG6$$,"^h|%)([;~k{=0N^0ZY-Fj5g8tGQ[~'-蛆ohq> )$d2R%Ս.5[ ݭOb~e# Da Scx~[dmig*PhYOPRzmYgWgԅFV,4:KhW{;1w8tAb-lr0wa#Sp4Y.O2I" ] lWR"==Iƨ36>Lgd߳+$>2r^ {.;f]꒾(r"Ʀ⓳Sx[XH;pCWx1vl{~en 'J $/G0sձ^~(9W4yq=͒foS$%F@߸Ӟ;x˝7'OTSdohƋPHU!R#q/U%|_vTA$NhT#_ z|U޵TO }!EI); &()#^9,R*VMXK)]`>';2LR~2$@5ߘӌ)C\7y: y:];=}XdnoMP#=h'(S]5i.է4n(8L.W$u6-²F |QVs"PFnudi}#tODQg 62f$b VRk1*pE{Nʅ6#pEaP? DQB ^tA)UcCy=, /? J-ۀ%1Uj50¯b"CL1AfsOA?6|zGXubnwO 3ϦySP!u kjHVj%(7ˋEܳ._4.xW)}l9(zxs2oN 3y+Y 6 TI{HK~l̕ d0…#*Gps x?]idJ6 Z.3B)!T0/L4MVȁf%/ WbO^f# `ԝqccL{3<-2)g~ )x$P6[K> uZ⏽_e<H=[CS'uE-yҽZKt7,fUOS&59h„ŻrR#6)(*𛑶J6Ƌ:8.cv+H2ؠto{+ JZ6\l t[eSR#lC*ybLLhsl:ۅBį^@B`i`;SA!WZ4NGjHlY^!a#SœLX}>d4Z#ࠒJ +Nҵ3Έg`: ZJus /4Csh>uP5Z.rVmNO) [\E#Ao|&Ɵ>LLu(Vֹ$F^RTتԍQs<΃R+k&4xl2~<9Zjox?&Kڣi)m!e?x}4/,oՃ"TX+Uo ^`pNC'Fr۵?7]dC,B7A$ӰLA'a^K9o< DDV^E!k8 ,} ,)׫)Jҽ }t0ȶm~ᯍI)k-_]Dk).wZq闞 0]6ΙJCF2khbaɆQ> !oA l*;iǤ6PVuQHy d`` w N'ﺐgɫ5fm6`za9_A)jƒ9,C:-U]fԺ&Tl5猍B4! d/NZɈJɺGQD# TL9 yώ3KBMBPE2 e rQ|dVx"xt(ٔTˢRj2~*m8KWt_x/4 ϻ/)$%kwѐ_}/ gQRr*R~'NxR8-}5pevrqn\>X'`q7:B).yw ="o@ u?yk_I] t;CgYHOB=w4fVk Mf mcI s2 +p4 `UK˽ GdFjj0 X)u1"d&aZJ!H`CG c`.K,>"IJA_%awxp4s^69i8Wg'+|ֆ2+|;?#WΆv% &v4>Jo0nR!'&G԰χџL[Ti1o:ӏLK>R1b;F#=V7<ٟOf+{uyr88;/ ql4?ϟ<Χ3ㄩ>N6˓77߿~꧕?.ǣOag/^g?Lgh7Ox~˟>y<}|G/tt0TMle=4ep~yg;d ͮ,8c6`dd^`|\=M3MEOqrL4VϷrL{?<Ã?<!bwv09DAQxPEAQxP拆o.BjhBdF#nyX+Rye\N*RE BPB :Y!{ػk_nTSTnWV#oXer:@t#3h;AjϿc~4sBaR[ ™g\5+x-*+m\8_e}LJH`|#Ji})&묜k s(Co  reh#KReb)^Ya"v: 0,* U0hoPOK2 Ȫ vq@+185 hM,AvBTdbSѥe5xC`hEJA2'!Г>,dm{Ӡq>TƠ_@i\(94nes |1O'J>^gs)I3BOf sE̫hfƷ\oPzCzq\bk..q>e-[!wHmpP[ NpVvͫFzH4{ lЯTTw&g9mJ29YERA+ȖNZ$/+SgOݴN_/XYM$y#2Y{k=p< gw>J?3~ UX-6}&_p&_pÌ[ڰgf?{&]hϧʊ߫󽹜KyFiv*ӞG1Gk,mTwP oH,5|Z^$x@Y_>%ݸX2FszrzR"Q'Udmb*eu*lN1" ֛Vȶcfی~e.PfIGV )J^)0>p* Eި^\C;'dJHge}%9zP2C׳%Jr\-q*;Az&I4^dЅ?iQ:S(_⴮)ra`3]UA|`b}k9\87D2%+!O"L1ZW^hd}qKε*8 [Ị dA.jjtkTRDk^6;EZ6Ph[ I\"6V+-eF9V;A2X:\}導Bk8Ѹ1\P#64JG^g$l- ' b55q,XJI2h x5DFnv2[(± PMM}%@5K>ŀ"lv"bI>DF3{@U8Me)Dkq,7ކ,dZ-uz4]fFV|2^dj{ligXc!\lMAJfK Sffil1DcrJUe3$W)f6bDdRL+U獶?FcT]U\/i-'OvvJ)]?I\Gc}t*׀-t*ŀYxX'Ѩ tEWZݩt_ީTMwkJq_Xi\ݩS]֩tF73n{^hq[ y_ALK]֌=mm!;`fIl|iV8+c-iׯ,Y:mlnsɀ$u_WV܃ݷsFK"6! U^mMy[HXfBHE V36KV\0,$,xI!џ uhv M Q8z6E%e!ևK[Ld( r=$+@2%?O]U "jKDNW=Gq]n\7CO3@N侐4$`3[vd;A,,$ vY:CnZ+Ij--i QaI6:Kh׭窵&c $J+ƮV E; ,ٚ`KN.2фtî] ղWoF5AA]}ٵu8ʭr(g/fk6wnEx[ȿ$w f$Lwf˕NuHS?1׏ ؕ1< 1>Eዼ︲\,guZЭHF׺e 8qK@cT\Jv P& 4}x['gAyɨG|o2~}q,;qt6C>Ln;{ ZXјpPHW-?kz>hf-Pw9KP?>^rjY˄xʄ%aom?Yu٪ijSO1gA9g ~]>$sv'+&ޱV_֬E}v*`/>~2l%Y9xǯ~인׻qq\(X?x+ӬyG];VNX/-TNQ)1q% ҄QC @ 9йhQvzak$F  -gZ4u}${7V\\K[+Nѐ[٨f]cC~gB`ޙ:{VMaI.{l#"ĭ_H ."˾Ǒtn[_9ӯ?[;2%g'ތ<}GzUA']t>?eG.?2(١*<` )R`׍J"zC|v޼bKBS:N:$`j}=lUͅoYz(u ፣«>||=p7TOhg|j~xbonx}ޟ|s_s MųqJ5c*VǕy UXSĂ/i/UxyM >Qq;ol_LNp ]2X`0],^ޮ.Qi ] Yyd%rcpf#vt06%}g Pu86#%W+řb@&HߝkE>[EF9/AFuvva)lWivEߑHf:n܌e.$`BhzzZNJIL$]pAz%_'SHpՔh8{2b^Lc Il{~A?sZux/}ggOIg"Y&Z9Fʾ $F|{K+)Zڿo fcj+edٍZWu[yM.yy]PlgKVϭv :e %ڳ.X"O5Dȥ&EzF%5 );pKHY[ ;Y+GJ̾[^{}2-dz)p[؉#7FBEg@MBs-QhlgMZ+B4k<;ulŮe Jύ-TCk}MD. 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000004753123015136715070017707 0ustar rootroot
Jan 29 16:34:36 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 16:34:36 crc restorecon[4741]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36
crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 
crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 16:34:36 crc restorecon[4741]: 
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:36 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
[Jan 29 16:34:37 crc restorecon[4741]: identical "not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13" messages repeat for each of the following entries under /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ (the directory and the catalog.json inside it): application-services-metering-operator, aqua, argocd-operator, assisted-service-operator, authorino-operator, automotive-infra, aws-efs-operator, awss3-operator-registry, azure-service-operator, beegfs-csi-driver-operator, bpfman-operator, camel-k, camel-karavan-operator, cass-operator-community, cert-manager, cert-utils-operator, cluster-aas-operator, cluster-impairment-operator, cluster-manager, cockroachdb, codeflare-operator, community-kubevirt-hyperconverged, community-trivy-operator, community-windows-machine-config-operator, customized-user-remediation, cxl-operator, dapr-kubernetes-operator, datadog-operator, datatrucker-operator, dbaas-operator, debezium-operator, dell-csm-operator, deployment-validation-operator, devopsinabox, dns-operator, dynatrace-operator, eclipse-amlen-operator, eclipse-che, ecr-secret-operator, edp-keycloak-operator, eginnovations-operator, egressip-ipam-operator, ember-csi-community-operator, etcd, eventing-kogito, external-secrets-operator, falcon-operator, fence-agents-remediation, flink-kubernetes-operator, flux, k8gb, fossul-operator, github-arc-operator, gitops-primer, gitwebhook-operator, global-load-balancer-operator, grafana-operator, group-sync-operator, hawtio-operator, hazelcast-platform-operator, hedvig-operator, hive-operator, horreum-operator, hyperfoil-bundle, ibm-block-csi-operator-community, ibm-security-verify-access-operator, ibm-spectrum-scale-csi-operator, ibmcloud-operator, infinispan, integrity-shield-operator, ipfs-operator, istio-workspace-operator, jaeger, kaoto-operator, keda, keepalived-operator, keycloak-operator, keycloak-permissions-operator, klusterlet, kogito-operator, koku-metrics-operator, konveyor-operator, korrel8r, kuadrant-operator, kube-green, kubecost, kubernetes-imagepuller-operator, kubeturbo, l5-operator, layer7-operator, lbconfig-operator, lib-bucket-provisioner, limitador-operator, logging-operator, loki-helm-operator, loki-operator, machine-deletion-remediation, mariadb-operator, marin3r, mercury-operator, microcks, mongodb-atlas-kubernetes, mongodb-operator, move2kube-operator, multi-nic-cni-operator, multicluster-global-hub-operator, multicluster-operators-subscription, must-gather-operator, namespace-configuration-operator, ncn-operator, ndmspc-operator, netobserv-operator, neuvector-community-operator, nexus-operator, nexus-operator-m88i, nfs-provisioner-operator, nlp-server, node-discovery-operator, node-healthcheck-operator, node-maintenance-operator, nsm-operator, oadp-operator, observability-operator, oci-ccm-operator, ocm-operator, odoo-operator, opendatahub-operator, openebs, openshift-nfd-operator, openshift-node-upgrade-mutex-operator, openshift-qiskit-operator, opentelemetry-operator, patch-operator, patterns-operator, pcc-operator, pelorus-operator, percona-xtradb-cluster-operator, portworx-essentials, postgresql, proactive-node-scaling-operator, project-quay]
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc 
restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:34:37 crc restorecon[4741]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 16:34:38 crc kubenswrapper[4746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:34:38 crc kubenswrapper[4746]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 16:34:38 crc kubenswrapper[4746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:34:38 crc kubenswrapper[4746]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
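The restorecon pass above walks /var/lib/kubelet comparing each path's on-disk SELinux label against the policy default; paths an admin customized, here container_file_t with MCS category pairs such as s0:c7,c13 that keep one container's files off-limits to another, are deliberately left alone ("not reset"), and only the kubenswrapper binary is actually relabeled, from bin_t to kubelet_exec_t. On Linux the label lives in the security.selinux extended attribute; below is a minimal Go sketch of reading it (the path in main is illustrative):

```go
// Read a file's SELinux label the way restorecon compares it:
// the label is stored in the "security.selinux" extended attribute.
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func selinuxLabel(path string) (string, error) {
	buf := make([]byte, 256)
	n, err := unix.Getxattr(path, "security.selinux", buf)
	if err != nil {
		return "", err
	}
	// The attribute value is commonly NUL-terminated; trim trailing NULs.
	for n > 0 && buf[n-1] == 0 {
		n--
	}
	return string(buf[:n]), nil
}

func main() {
	label, err := selinuxLabel("/var/lib/kubelet/pods") // illustrative path from the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(label) // e.g. system_u:object_r:container_file_t:s0
}
```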
Jan 29 16:34:38 crc kubenswrapper[4746]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:34:38 crc kubenswrapper[4746]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.155909 4746 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.159929 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.159948 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.159954 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.159959 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160004 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160012 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160018 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160023 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160028 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160033 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160038 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160043 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160048 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160053 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160059 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160066 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160072 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160078 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160084 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160090 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160105 4746 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160112 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160119 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160125 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160131 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160138 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160144 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160150 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160156 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160162 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160170 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160176 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160182 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160188 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160215 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160222 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160228 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160237 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
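Each "Flag ... has been deprecated" notice above points at the same remedy: set the value in the KubeletConfiguration file named by --config, which the flag dump below shows is /etc/kubernetes/kubelet.conf. The sketch below shows the shape of the equivalent stanza, using the kubelet.config.k8s.io/v1beta1 field names these flags map to; the field names and the parse check are illustrative assumptions, not this cluster's actual file. Two of the deprecated flags have no config counterpart: --minimum-container-ttl-duration is superseded by eviction settings, and --pod-infra-container-image by the CRI runtime's own sandbox-image setting.

```go
// A sketch of moving the deprecated kubelet flags above into the
// --config file. Field names follow kubelet.config.k8s.io/v1beta1;
// values are taken from the flag dump in this log.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

const kubeletConf = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# --container-runtime-endpoint
containerRuntimeEndpoint: /var/run/crio/crio.sock
# --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# --system-reserved
systemReserved:
  cpu: 200m
  memory: 350Mi
  ephemeral-storage: 350Mi
`

func main() {
	var cfg map[string]any
	if err := yaml.Unmarshal([]byte(kubeletConf), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("parsed %d top-level keys\n", len(cfg))
}
```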
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160244 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160251 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160258 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160267 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160273 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160280 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160286 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160295 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160301 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160307 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160315 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160322 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160329 4746 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160335 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160341 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160347 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160353 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160359 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160365 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160372 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160378 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160384 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160390 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160396 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160402 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160408 4746 feature_gate.go:330] unrecognized feature gate: 
MixedCPUsAllocation Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160414 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160420 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160426 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160432 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160442 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160450 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.160457 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161456 4746 flags.go:64] FLAG: --address="0.0.0.0" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161471 4746 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161483 4746 flags.go:64] FLAG: --anonymous-auth="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161490 4746 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161498 4746 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161504 4746 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161511 4746 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161519 4746 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161525 4746 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161530 4746 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161538 4746 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161544 4746 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161551 4746 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161557 4746 flags.go:64] FLAG: --cgroup-root="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161563 4746 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161569 4746 flags.go:64] FLAG: --client-ca-file="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161575 4746 flags.go:64] FLAG: --cloud-config="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161581 4746 flags.go:64] FLAG: --cloud-provider="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161587 4746 flags.go:64] FLAG: --cluster-dns="[]" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161594 4746 flags.go:64] FLAG: --cluster-domain="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161599 4746 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 
29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161605 4746 flags.go:64] FLAG: --config-dir="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161611 4746 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161617 4746 flags.go:64] FLAG: --container-log-max-files="5" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161625 4746 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161631 4746 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161636 4746 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161642 4746 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161649 4746 flags.go:64] FLAG: --contention-profiling="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.161654 4746 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162582 4746 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162593 4746 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162600 4746 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162608 4746 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162615 4746 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162622 4746 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162628 4746 flags.go:64] FLAG: --enable-load-reader="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162636 4746 flags.go:64] FLAG: --enable-server="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162643 4746 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162652 4746 flags.go:64] FLAG: --event-burst="100" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162659 4746 flags.go:64] FLAG: --event-qps="50" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162665 4746 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162671 4746 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162677 4746 flags.go:64] FLAG: --eviction-hard="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162686 4746 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162691 4746 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162698 4746 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162704 4746 flags.go:64] FLAG: --eviction-soft="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162710 4746 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162716 4746 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162722 4746 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162728 4746 flags.go:64] FLAG: --experimental-mounter-path="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162734 4746 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162740 4746 flags.go:64] FLAG: --fail-swap-on="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162746 4746 flags.go:64] FLAG: --feature-gates="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162753 4746 flags.go:64] FLAG: --file-check-frequency="20s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162760 4746 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162766 4746 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162772 4746 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162779 4746 flags.go:64] FLAG: --healthz-port="10248" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162785 4746 flags.go:64] FLAG: --help="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162791 4746 flags.go:64] FLAG: --hostname-override="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162798 4746 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162804 4746 flags.go:64] FLAG: --http-check-frequency="20s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162810 4746 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162816 4746 flags.go:64] FLAG: --image-credential-provider-config="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162821 4746 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162827 4746 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162832 4746 flags.go:64] FLAG: --image-service-endpoint="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162838 4746 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162844 4746 flags.go:64] FLAG: --kube-api-burst="100" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162849 4746 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162855 4746 flags.go:64] FLAG: --kube-api-qps="50" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162862 4746 flags.go:64] FLAG: --kube-reserved="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162868 4746 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162874 4746 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162881 4746 flags.go:64] FLAG: --kubelet-cgroups="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162886 4746 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162892 4746 flags.go:64] FLAG: --lock-file="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162898 4746 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162904 4746 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 29 
16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162910 4746 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162919 4746 flags.go:64] FLAG: --log-json-split-stream="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162925 4746 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162930 4746 flags.go:64] FLAG: --log-text-split-stream="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162935 4746 flags.go:64] FLAG: --logging-format="text" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162941 4746 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162947 4746 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162953 4746 flags.go:64] FLAG: --manifest-url="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162959 4746 flags.go:64] FLAG: --manifest-url-header="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162970 4746 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162976 4746 flags.go:64] FLAG: --max-open-files="1000000" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162983 4746 flags.go:64] FLAG: --max-pods="110" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162988 4746 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.162995 4746 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163000 4746 flags.go:64] FLAG: --memory-manager-policy="None" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163007 4746 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163012 4746 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163018 4746 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163024 4746 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163037 4746 flags.go:64] FLAG: --node-status-max-images="50" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163043 4746 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163049 4746 flags.go:64] FLAG: --oom-score-adj="-999" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163054 4746 flags.go:64] FLAG: --pod-cidr="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163059 4746 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163069 4746 flags.go:64] FLAG: --pod-manifest-path="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163075 4746 flags.go:64] FLAG: --pod-max-pids="-1" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163080 4746 flags.go:64] FLAG: --pods-per-core="0" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163086 4746 flags.go:64] FLAG: --port="10250" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163092 4746 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163097 4746 flags.go:64] FLAG: --provider-id="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163103 4746 flags.go:64] FLAG: --qos-reserved="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163109 4746 flags.go:64] FLAG: --read-only-port="10255" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163114 4746 flags.go:64] FLAG: --register-node="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163120 4746 flags.go:64] FLAG: --register-schedulable="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163126 4746 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163141 4746 flags.go:64] FLAG: --registry-burst="10" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163147 4746 flags.go:64] FLAG: --registry-qps="5" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163152 4746 flags.go:64] FLAG: --reserved-cpus="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163157 4746 flags.go:64] FLAG: --reserved-memory="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163164 4746 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163170 4746 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163176 4746 flags.go:64] FLAG: --rotate-certificates="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163182 4746 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163216 4746 flags.go:64] FLAG: --runonce="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163222 4746 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163229 4746 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163236 4746 flags.go:64] FLAG: --seccomp-default="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163242 4746 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163249 4746 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163255 4746 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163261 4746 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163267 4746 flags.go:64] FLAG: --storage-driver-password="root" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163273 4746 flags.go:64] FLAG: --storage-driver-secure="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163279 4746 flags.go:64] FLAG: --storage-driver-table="stats" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163284 4746 flags.go:64] FLAG: --storage-driver-user="root" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163290 4746 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163297 4746 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163303 4746 flags.go:64] FLAG: --system-cgroups="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163308 4746 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163317 4746 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163322 4746 flags.go:64] FLAG: --tls-cert-file="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163328 4746 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163335 4746 flags.go:64] FLAG: --tls-min-version="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163341 4746 flags.go:64] FLAG: --tls-private-key-file="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163348 4746 flags.go:64] FLAG: --topology-manager-policy="none" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163353 4746 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163359 4746 flags.go:64] FLAG: --topology-manager-scope="container" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163365 4746 flags.go:64] FLAG: --v="2" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163373 4746 flags.go:64] FLAG: --version="false" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163380 4746 flags.go:64] FLAG: --vmodule="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163387 4746 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163393 4746 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163543 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163550 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163556 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163562 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163567 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163574 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163580 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163586 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163593 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
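The flag dump closes with --system-reserved=cpu=200m,ephemeral-storage=350Mi,memory=350Mi and, earlier, --enforce-node-allocatable=[pods]: the kubelet subtracts reservations from node capacity before advertising what pods may use, i.e. allocatable = capacity - kube-reserved - system-reserved - hard eviction thresholds. Here is a small arithmetic sketch with k8s.io/apimachinery's resource.Quantity; the 16Gi capacity and 100Mi eviction threshold are invented inputs for illustration, only the 350Mi figure comes from this log:

```go
// Node-allocatable arithmetic as the kubelet applies it:
// allocatable = capacity - kubeReserved - systemReserved - evictionHard.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	capacity := resource.MustParse("16Gi")        // assumed node memory
	systemReserved := resource.MustParse("350Mi") // from --system-reserved above
	kubeReserved := resource.MustParse("0")       // none set in the flag dump
	evictionHard := resource.MustParse("100Mi")   // assumed memory.available threshold

	allocatable := capacity.DeepCopy()
	allocatable.Sub(systemReserved)
	allocatable.Sub(kubeReserved)
	allocatable.Sub(evictionHard)

	fmt.Printf("allocatable memory: %s\n", allocatable.String()) // 16Gi minus 450Mi
}
```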
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163600 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163605 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163610 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163615 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163620 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163625 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163630 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163635 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163640 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163645 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163650 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163654 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163659 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163664 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163670 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163676 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163681 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163689 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163695 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163702 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163708 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163713 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163718 4746 feature_gate.go:330] unrecognized feature gate: Example Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163723 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163728 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163735 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163740 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163745 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163750 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163755 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163760 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163764 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163769 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163775 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
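The long runs of "unrecognized feature gate" warnings here are noise rather than failures: the gate list handed to the kubelet evidently includes cluster-level OpenShift gates (GatewayAPI, NewOLM, and the rest) that the kubelet's own registry does not know, so feature_gate.go:330 warns and skips them, while gates it does know but that are deprecated or already GA (KMSv1, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders) draw the "will be removed" warnings instead. Below is a stripped-down sketch of that apply-and-warn loop; the known-gate table is a toy stand-in, not the kubelet's real registry:

```go
// A toy version of the feature-gate merge that produces the warnings
// above: unknown names warn and are skipped; known names are applied.
package main

import "fmt"

var known = map[string]bool{ // defaults for a few gates the kubelet knows
	"KMSv1":                     false,
	"ValidatingAdmissionPolicy": true,
	"NodeSwap":                  false,
}

func applyGates(requested map[string]bool) map[string]bool {
	effective := make(map[string]bool, len(known))
	for name, def := range known {
		effective[name] = def
	}
	for name, val := range requested {
		if _, ok := known[name]; !ok {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		effective[name] = val
	}
	return effective
}

func main() {
	gates := map[string]bool{"KMSv1": true, "GatewayAPI": true, "NewOLM": false}
	fmt.Println(applyGates(gates)) // GatewayAPI and NewOLM only warn
}
```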
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163781 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163787 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163793 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163798 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163804 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163809 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163814 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163819 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163824 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163829 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163834 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163838 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163843 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163848 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163853 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163859 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163864 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163869 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163874 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163879 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163884 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163890 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163895 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163901 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163905 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163910 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 
29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163915 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.163920 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.163934 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.177447 4746 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.177509 4746 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177687 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177721 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177732 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177749 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177758 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177768 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177777 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177785 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177794 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177803 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177811 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177820 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177829 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177838 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177847 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177856 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177864 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177873 4746 feature_gate.go:330] unrecognized feature gate: 
NodeDisruptionPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177881 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177893 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177901 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177910 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177919 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177927 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177936 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177945 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177954 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177962 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177971 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177979 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.177991 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178003 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178013 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178022 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178031 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178040 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178049 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178059 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178068 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178077 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178086 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178098 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178112 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178121 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178131 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178142 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178150 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178160 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178169 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178178 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178216 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178226 4746 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178234 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178242 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178251 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178262 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178271 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178279 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178287 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178298 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178307 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178316 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178324 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178334 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178342 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178351 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178360 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178368 4746 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178379 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178390 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178402 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.178417 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178728 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178753 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178764 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178773 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178783 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178794 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178806 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178818 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178831 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178844 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178857 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178870 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178881 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178893 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178904 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178915 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178926 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178937 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178947 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178960 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178971 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178980 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178988 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.178997 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179006 4746 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179015 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179024 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179032 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179040 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179049 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179057 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179066 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179074 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179083 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179091 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179100 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179108 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179117 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179126 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179134 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179142 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179151 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179160 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179168 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179176 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179221 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179231 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179239 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179247 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179256 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179267 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179279 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179289 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179300 4746 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179309 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179320 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179329 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179338 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179347 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179356 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179364 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179372 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179381 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179390 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179398 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179407 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179415 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179425 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179433 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179441 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.179450 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.179465 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.181154 4746 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.188451 4746 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.188628 4746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.192832 4746 server.go:997] "Starting client certificate rotation"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.192885 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.193136 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-03 12:40:07.454954214 +0000 UTC
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.193325 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.226273 4746 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.229914 4746 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.232726 4746 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.263334 4746 log.go:25] "Validated CRI v1 runtime API"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.311263 4746 log.go:25] "Validated CRI v1 image API"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.315347 4746 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.322695 4746 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-16-24-46-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.322741 4746 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.345368 4746 manager.go:217] Machine: {Timestamp:2026-01-29 16:34:38.341250672 +0000 UTC m=+0.741835356 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a3b8f3d1-c6d9-472d-8c83-12b7d56140ac BootID:36d7a0f4-88b9-425a-915e-1df9cb8c68bf Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:12:cb:1a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:12:cb:1a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:bc:7c:ce Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:82:5c:a9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b9:f3:e7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6b:ab:bf Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:e0:0b:85 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:7b:f2:ff:46:12 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:e0:90:85:23:c8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.345694 4746 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.345842 4746 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.346168 4746 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.346431 4746 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.346472 4746 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.346745 4746 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.346758 4746 container_manager_linux.go:303] "Creating device plugin manager"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.348104 4746 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.348145 4746 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.348353 4746 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.348451 4746 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.353848 4746 kubelet.go:418] "Attempting to sync node with API server"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.353875 4746 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.353933 4746 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.353949 4746 kubelet.go:324] "Adding apiserver pod source"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.353966 4746 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.358768 4746 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.359445 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.359394 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.359515 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.359531 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.360713 4746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.364159 4746 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368641 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368694 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368716 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368735 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368766 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368785 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368804 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368834 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368857 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368876 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368902 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.368921 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.370519 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.371314 4746 server.go:1280] "Started kubelet"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.372757 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.373586 4746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.374003 4746 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.375300 4746 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:34:38 crc systemd[1]: Started Kubernetes Kubelet.
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.382926 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.384870 4746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.385546 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:58:33.332788683 +0000 UTC
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.385655 4746 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.385821 4746 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.385336 4746 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.386333 4746 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.386702 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.386898 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.387586 4746 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.387838 4746 factory.go:55] Registering systemd factory
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.388020 4746 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.387963 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="200ms"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.388537 4746 factory.go:153] Registering CRI-O factory
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.388567 4746 factory.go:221] Registration of the crio container factory successfully
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.388653 4746 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.388687 4746 factory.go:103] Registering Raw factory
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.388709 4746 manager.go:1196] Started watching for new ooms in manager
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.391475 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f40d8c6bcc358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:34:38.371267416 +0000 UTC m=+0.771852110,LastTimestamp:2026-01-29 16:34:38.371267416 +0000 UTC m=+0.771852110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.392439 4746 manager.go:319] Starting recovery of all containers
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.404961 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405032 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405047 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405058 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405067 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405079 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405087 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405096 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405108 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405119 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405129 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405142 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405158 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405170 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405180 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405208 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405223 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405259 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405271 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405282 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405294 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405306 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405321 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405332 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405345 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405355 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405366 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405378 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405389 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405400 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405411 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405421 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405435 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405447 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405459 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405469 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405478 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405488 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405499 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405510 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405522 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405572 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405587 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405601 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405617 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405630 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405642 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405656 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405670 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405686 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405698 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405710 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405723 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405735 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405747 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405759 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405768 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405777 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405786 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405794 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405806 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405816 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405826 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405835 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405844 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405856 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405868 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405879 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405890 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405901 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405913 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405924 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405941 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405952 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405963 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405975 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405986 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.405996 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406006 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406016 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406030 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406040 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406051 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406062 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406074 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406083 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406092 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406101 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406111 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406120 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406130 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406140 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406153 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406163 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b"
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406180 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406205 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406215 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406225 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406234 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406244 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406256 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406265 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406275 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406290 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406304 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406340 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406351 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406362 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406372 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406384 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406395 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406407 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406418 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406428 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406437 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406447 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406457 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406467 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406475 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406484 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406493 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406506 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406514 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406523 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406531 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406540 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406552 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406562 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406572 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406582 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406590 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406598 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406606 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406616 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406626 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406636 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406645 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406656 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406666 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406676 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406685 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406696 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406705 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406768 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406782 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406790 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406799 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406810 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406820 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406829 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406841 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406854 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406863 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406879 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406890 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406900 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406908 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406918 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406928 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406937 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406945 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406953 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406964 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406974 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406982 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.406992 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407002 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407012 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407023 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407033 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407042 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407052 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407063 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407074 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407086 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407094 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407104 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407113 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407121 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407131 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407141 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407151 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407164 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.407174 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409053 4746 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409080 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409092 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409102 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409111 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409121 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409131 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409139 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409149 4746 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409157 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409166 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409175 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409188 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409238 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409247 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409256 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409264 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409275 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409285 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409299 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409382 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409392 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409402 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409412 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409422 4746 reconstruct.go:97] "Volume reconstruction finished" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.409431 4746 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.414287 4746 manager.go:324] Recovery completed Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.433403 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.438218 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.438274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.438283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.439511 4746 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.439529 4746 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.439564 4746 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.441403 4746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.444272 4746 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.444422 4746 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.444461 4746 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.444522 4746 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:34:38 crc kubenswrapper[4746]: W0129 16:34:38.447004 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.447113 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.463258 4746 policy_none.go:49] "None policy: Start" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.465217 4746 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.465255 4746 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.485970 4746 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.506114 4746 manager.go:334] "Starting Device Plugin manager" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.506228 4746 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.506257 4746 server.go:79] "Starting device plugin registration server" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.506730 4746 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.506757 4746 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.506936 4746 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.507075 4746 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.507105 4746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.520864 4746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.546737 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 16:34:38 crc kubenswrapper[4746]: 
I0129 16:34:38.546848 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.548715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.548786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.548803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.549048 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.550230 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.550307 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.551570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.551611 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.551620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.551786 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.551914 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.551966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.551988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.552404 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.552431 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.553319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.553374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.553392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.553614 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.553987 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.554060 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.555538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.555582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.555598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556848 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556937 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.556973 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.557980 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.558027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.558050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.558072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.558093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.558104 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.558342 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.558376 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.559299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.559349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.559371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.588901 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="400ms" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.607299 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611251 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611389 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611504 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611809 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611904 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.611967 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612023 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.612036 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612125 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612230 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612313 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612374 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612434 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612493 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612552 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612622 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.612678 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714638 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714782 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714834 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714875 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714916 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714966 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.714946 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715084 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715110 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715129 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715037 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715171 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715228 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715094 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715122 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715247 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715295 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715384 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715391 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715438 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715456 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715527 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715530 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715573 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715581 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715633 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.715784 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.813072 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.814874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.814935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.814971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.815010 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.815637 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.877830 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.883912 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.897323 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 16:34:38 crc kubenswrapper[4746]: I0129 16:34:38.909219 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
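The reconciler entries above walk each static pod's hostPath volumes through the usual sequence: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, with the volume's unique name formed as kubernetes.io/host-path/<pod UID>-<volume name>. A sketch of how one such volume is expressed with the Kubernetes API types; the host path and type below are illustrative assumptions, since the log records only the plugin name and the unique name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate // assumed type; not visible in the log
	vol := corev1.Volume{
		Name: "cert-dir",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				// Illustrative path; the log shows only the unique name,
				// e.g. kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir.
				Path: "/etc/kubernetes/static-pod-resources/kube-scheduler-certs",
				Type: &hostPathType,
			},
		},
	}
	// Reconstruct the unique name pattern seen in the reconciler entries: <plugin>/<pod UID>-<volume name>.
	fmt.Printf("kubernetes.io/host-path/%s-%s\n", "3dcd261975c3d6b9a6ad6367fd4facd3", vol.Name)
}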
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:34:38 crc kubenswrapper[4746]: E0129 16:34:38.990795 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="800ms" Jan 29 16:34:39 crc kubenswrapper[4746]: W0129 16:34:39.035865 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-053adac19f04a0c29a0d30ed568c245778faa7e7c6e7f8ecbb0ba3516dd009ea WatchSource:0}: Error finding container 053adac19f04a0c29a0d30ed568c245778faa7e7c6e7f8ecbb0ba3516dd009ea: Status 404 returned error can't find the container with id 053adac19f04a0c29a0d30ed568c245778faa7e7c6e7f8ecbb0ba3516dd009ea Jan 29 16:34:39 crc kubenswrapper[4746]: W0129 16:34:39.039785 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-77b2cb72b790ee763a1d9e748faa57a212992737541eb119833a3b91fc6acc7f WatchSource:0}: Error finding container 77b2cb72b790ee763a1d9e748faa57a212992737541eb119833a3b91fc6acc7f: Status 404 returned error can't find the container with id 77b2cb72b790ee763a1d9e748faa57a212992737541eb119833a3b91fc6acc7f Jan 29 16:34:39 crc kubenswrapper[4746]: W0129 16:34:39.040255 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e3efbefaecbbb03859868fdb372d9dad572d5b3182ab79d73b9926d728271087 WatchSource:0}: Error finding container e3efbefaecbbb03859868fdb372d9dad572d5b3182ab79d73b9926d728271087: Status 404 returned error can't find the container with id e3efbefaecbbb03859868fdb372d9dad572d5b3182ab79d73b9926d728271087 Jan 29 16:34:39 crc kubenswrapper[4746]: W0129 16:34:39.041992 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0bdd992a71d98ce066ea5ebd608e4a874ff8de483d30791733671edcc674a5d9 WatchSource:0}: Error finding container 0bdd992a71d98ce066ea5ebd608e4a874ff8de483d30791733671edcc674a5d9: Status 404 returned error can't find the container with id 0bdd992a71d98ce066ea5ebd608e4a874ff8de483d30791733671edcc674a5d9 Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.216155 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.218002 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.218066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.218093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.218148 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:34:39 crc kubenswrapper[4746]: E0129 16:34:39.218901 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 29 16:34:39 crc kubenswrapper[4746]: W0129 16:34:39.342958 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:39 crc kubenswrapper[4746]: E0129 16:34:39.343094 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.373626 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.385713 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:43:21.216367605 +0000 UTC Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.451055 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3efbefaecbbb03859868fdb372d9dad572d5b3182ab79d73b9926d728271087"} Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.452740 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"77b2cb72b790ee763a1d9e748faa57a212992737541eb119833a3b91fc6acc7f"} Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.454367 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"053adac19f04a0c29a0d30ed568c245778faa7e7c6e7f8ecbb0ba3516dd009ea"} Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.455835 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5a973d2041cd1998e58cb2831b9437d9d248ac29a0df5c17822538c51854f1d9"} Jan 29 16:34:39 crc kubenswrapper[4746]: I0129 16:34:39.457266 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0bdd992a71d98ce066ea5ebd608e4a874ff8de483d30791733671edcc674a5d9"} Jan 29 16:34:39 crc kubenswrapper[4746]: E0129 16:34:39.500271 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f40d8c6bcc358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:34:38.371267416 +0000 UTC m=+0.771852110,LastTimestamp:2026-01-29 16:34:38.371267416 +0000 UTC m=+0.771852110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:34:39 crc kubenswrapper[4746]: E0129 16:34:39.792406 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="1.6s" Jan 29 16:34:39 crc kubenswrapper[4746]: W0129 16:34:39.821038 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:39 crc kubenswrapper[4746]: E0129 16:34:39.821154 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:34:39 crc kubenswrapper[4746]: W0129 16:34:39.935724 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:39 crc kubenswrapper[4746]: E0129 16:34:39.935827 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:34:40 crc kubenswrapper[4746]: W0129 16:34:40.015236 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:40 crc kubenswrapper[4746]: E0129 16:34:40.015846 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.019320 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.023391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.023447 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.023462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
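The lease controller's retry interval doubles on each failure: interval="400ms", then "800ms", "1.6s" above, and "3.2s" further down, a plain exponential backoff. A minimal Go sketch of that doubling schedule; the 7s cap is an assumption for illustration, since the log only shows the first four intervals:

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 400 * time.Millisecond // first retry interval seen in the log
	maxInterval := 7 * time.Second     // assumed cap; not visible in the log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retrying lease in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}

Run as-is, this prints 400ms, 800ms, 1.6s, 3.2s, 6.4s, 7s: the first four match the intervals logged here.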
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.023495 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:34:40 crc kubenswrapper[4746]: E0129 16:34:40.024089 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.243162 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 16:34:40 crc kubenswrapper[4746]: E0129 16:34:40.244959 4746 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.373874 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.385895 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:40:23.399822127 +0000 UTC
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.463927 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="192f4fb53984a486c588b526f865200adc00c5f8311d97ba008743ec0c8f4dcb" exitCode=0
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.464029 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.464055 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"192f4fb53984a486c588b526f865200adc00c5f8311d97ba008743ec0c8f4dcb"}
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.465592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.465631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.465644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.467141 4746 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78" exitCode=0
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.467233 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.467241 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78"}
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.468214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.468241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.468257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.470421 4746 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05a8ae04908d54131a10e9728e62c86ea2049a2763034e79cd18f55ccbd2e2a2" exitCode=0
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.470511 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05a8ae04908d54131a10e9728e62c86ea2049a2763034e79cd18f55ccbd2e2a2"}
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.470758 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.472644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.472666 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.472680 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.475721 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff"}
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.475761 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a"}
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.478763 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051" exitCode=0
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.478794 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051"}
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.478894 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.480106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.480151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.480167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.483878 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.485289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.485321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:34:40 crc kubenswrapper[4746]: I0129 16:34:40.485336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.374303 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.386551 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:02:21.337772171 +0000 UTC
Jan 29 16:34:41 crc kubenswrapper[4746]: E0129 16:34:41.393213 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="3.2s"
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.484946 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e"}
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.484991 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.485003 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151"}
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.485019 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413"}
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.486052 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.486113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.486124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
"Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="12a0373be3ea275fe6876718fdc1df94e27df33b198e0ddf1d713d1be50c9005" exitCode=0 Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.487004 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.487139 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"12a0373be3ea275fe6876718fdc1df94e27df33b198e0ddf1d713d1be50c9005"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.487637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.487672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.487687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.489753 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.489786 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.489805 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.490501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.490526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.490536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.499167 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.499288 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.499309 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.499324 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.502318 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cdd76eb4e6bf9599f73d94b8eecb739f0950ba9299e9b756a03b614be53a37ef"} Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.502399 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.503464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.503533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.503552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.624717 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.626018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.626061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.626074 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:41 crc kubenswrapper[4746]: I0129 16:34:41.626103 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:34:41 crc kubenswrapper[4746]: E0129 16:34:41.626579 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.22:6443: connect: connection refused" node="crc" Jan 29 16:34:42 crc kubenswrapper[4746]: W0129 16:34:42.018004 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:42 crc kubenswrapper[4746]: E0129 16:34:42.018149 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:34:42 crc kubenswrapper[4746]: W0129 16:34:42.083417 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.22:6443: connect: connection refused Jan 29 16:34:42 crc kubenswrapper[4746]: E0129 16:34:42.083566 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.22:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.386742 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:33:21.321054809 +0000 UTC Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.507621 4746 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b13ec5614d04de6114de182f2aa8828a997725d039ea44d8cfa4c89fa46e5386" exitCode=0 Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.507720 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b13ec5614d04de6114de182f2aa8828a997725d039ea44d8cfa4c89fa46e5386"} Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.507905 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.509474 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.509555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.509585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.514679 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2337c64c0635fe83daea7ccc8f64256bbabf443f80c6e23cb49fe314da1a8676"} Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.514816 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.514881 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.515007 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.515629 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.515734 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.516041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.516079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.516091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.516319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.516356 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.516370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.517340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.517389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.517412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.517423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.517393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:42 crc kubenswrapper[4746]: I0129 16:34:42.517501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.386893 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:43:00.41476372 +0000 UTC Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.494410 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.518475 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.520930 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b021159753de142ea41bdd26a96492402b41e4f68de64adb2f0dbbbf793fc9a5"} Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.520996 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.520992 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd655e401067009559dc24f01e9dfe2ff2cf56bc16b650cf207ec7b9096e691d"} Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.521101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d408440453e50c43d6d0b03d2e877128bd6633071dd5556b9ef0cc3b19baf050"} Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.520999 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.521118 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9525d413168cb0505eee3863df24b198e53e061ed9013203e9671dfb08505780"} Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.521147 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.521168 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.523724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.523760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.523772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.523782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.523814 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:43 crc kubenswrapper[4746]: I0129 16:34:43.523828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.155389 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.369427 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.387911 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:31:34.687831077 +0000 UTC Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.528264 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8bfc104e010ce4dd572704d509c720e0abde9b97c06d3c4252a24030f416f8b9"} Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.528307 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.528419 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.528442 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529751 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529802 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529812 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.529877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.671439 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.827637 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.829532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.829588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.829608 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:44 crc kubenswrapper[4746]: I0129 16:34:44.829651 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.389072 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:54:09.458523447 +0000 UTC Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.530505 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.530526 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.530611 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532144 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532238 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:45 crc 
kubenswrapper[4746]: I0129 16:34:45.532124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:45 crc kubenswrapper[4746]: I0129 16:34:45.532314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:46 crc kubenswrapper[4746]: I0129 16:34:46.389953 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:08:50.036452348 +0000 UTC Jan 29 16:34:46 crc kubenswrapper[4746]: I0129 16:34:46.433340 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:34:46 crc kubenswrapper[4746]: I0129 16:34:46.433555 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:46 crc kubenswrapper[4746]: I0129 16:34:46.434789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:46 crc kubenswrapper[4746]: I0129 16:34:46.434827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:46 crc kubenswrapper[4746]: I0129 16:34:46.434839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.391107 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:44:52.782790419 +0000 UTC Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.448800 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.449022 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.450626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.450681 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.450704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.672090 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 16:34:47 crc kubenswrapper[4746]: I0129 16:34:47.672265 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:34:48 
crc kubenswrapper[4746]: I0129 16:34:48.392120 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:09:27.429881549 +0000 UTC Jan 29 16:34:48 crc kubenswrapper[4746]: E0129 16:34:48.521142 4746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.293359 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.293714 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.296030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.296096 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.296111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.301581 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.393157 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:45:56.764023117 +0000 UTC Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.441523 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.441805 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.443357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.443409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.443426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.543143 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.545513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.545593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:49 crc kubenswrapper[4746]: I0129 16:34:49.545618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:50 crc kubenswrapper[4746]: I0129 16:34:50.394168 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:32:05.679339129 +0000 UTC Jan 29 16:34:51 crc 
kubenswrapper[4746]: I0129 16:34:51.394849 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:23:08.886648754 +0000 UTC Jan 29 16:34:52 crc kubenswrapper[4746]: W0129 16:34:52.289601 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.289707 4746 trace.go:236] Trace[312085759]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:34:42.288) (total time: 10001ms): Jan 29 16:34:52 crc kubenswrapper[4746]: Trace[312085759]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:34:52.289) Jan 29 16:34:52 crc kubenswrapper[4746]: Trace[312085759]: [10.001269451s] [10.001269451s] END Jan 29 16:34:52 crc kubenswrapper[4746]: E0129 16:34:52.289736 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.388677 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.395101 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:05:59.708876438 +0000 UTC Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.552796 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.554990 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2337c64c0635fe83daea7ccc8f64256bbabf443f80c6e23cb49fe314da1a8676" exitCode=255 Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.555038 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2337c64c0635fe83daea7ccc8f64256bbabf443f80c6e23cb49fe314da1a8676"} Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.555225 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.556304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.556338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.556350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.556862 
4746 scope.go:117] "RemoveContainer" containerID="2337c64c0635fe83daea7ccc8f64256bbabf443f80c6e23cb49fe314da1a8676" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.601634 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.601718 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.616304 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 16:34:52 crc kubenswrapper[4746]: I0129 16:34:52.616394 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.216143 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.395518 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:48:45.234704473 +0000 UTC Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.559758 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.561778 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c"} Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.561902 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.562858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.562941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.562956 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.823228 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 16:34:53 crc kubenswrapper[4746]: 
I0129 16:34:53.823574 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.824924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.824965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.824976 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:53 crc kubenswrapper[4746]: I0129 16:34:53.865096 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.396717 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:48:26.168593904 +0000 UTC Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.456169 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.565227 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.565258 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.565388 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.566726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.566790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.566814 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.566903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.567003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:54 crc kubenswrapper[4746]: I0129 16:34:54.567137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.397720 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:12:59.452260931 +0000 UTC Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.567375 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.567531 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.568725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:55 crc kubenswrapper[4746]: 
I0129 16:34:55.568761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.568789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.569302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.569339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:55 crc kubenswrapper[4746]: I0129 16:34:55.569353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:56 crc kubenswrapper[4746]: I0129 16:34:56.398574 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:48:05.982136596 +0000 UTC Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.400018 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:40:15.471627652 +0000 UTC Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.454418 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.454680 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.456820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.456885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.456899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.466332 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.572531 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.573540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.573583 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.573594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:34:57 crc kubenswrapper[4746]: E0129 16:34:57.608213 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.610744 4746 trace.go:236] Trace[1122800504]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:34:46.902) (total 
time: 10707ms): Jan 29 16:34:57 crc kubenswrapper[4746]: Trace[1122800504]: ---"Objects listed" error: 10707ms (16:34:57.610) Jan 29 16:34:57 crc kubenswrapper[4746]: Trace[1122800504]: [10.70789308s] [10.70789308s] END Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.610808 4746 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.611368 4746 trace.go:236] Trace[1884683202]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:34:46.014) (total time: 11596ms): Jan 29 16:34:57 crc kubenswrapper[4746]: Trace[1884683202]: ---"Objects listed" error: 11596ms (16:34:57.611) Jan 29 16:34:57 crc kubenswrapper[4746]: Trace[1884683202]: [11.596813855s] [11.596813855s] END Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.611404 4746 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:34:57 crc kubenswrapper[4746]: E0129 16:34:57.611993 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.613880 4746 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.615521 4746 trace.go:236] Trace[382658692]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:34:42.963) (total time: 14651ms): Jan 29 16:34:57 crc kubenswrapper[4746]: Trace[382658692]: ---"Objects listed" error: 14651ms (16:34:57.614) Jan 29 16:34:57 crc kubenswrapper[4746]: Trace[382658692]: [14.651606135s] [14.651606135s] END Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.615594 4746 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.621861 4746 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.644811 4746 csr.go:261] certificate signing request csr-nkfsf is approved, waiting to be issued Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.658466 4746 csr.go:257] certificate signing request csr-nkfsf is issued Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.672555 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.672652 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 16:34:57 crc kubenswrapper[4746]: I0129 16:34:57.768601 4746 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.191178 4746 
transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.191492 4746 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.191492 4746 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.191516 4746 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.191597 4746 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.365334 4746 apiserver.go:52] "Watching apiserver" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.379398 4746 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.379825 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-wlbj9","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.380403 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.380488 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.380591 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.381152 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.381253 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.381318 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.381339 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.381342 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.381473 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.381648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.382561 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.384743 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.384798 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.384853 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.384930 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.385312 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.385461 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.385478 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.385521 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.386041 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.386688 4746 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.387877 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.388123 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.398752 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.400851 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:38:33.076861238 +0000 UTC Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.417349 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420434 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420470 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420495 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420518 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420544 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420564 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420587 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420618 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420641 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420662 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420687 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420710 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420783 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420802 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420827 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420846 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420863 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420888 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420907 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420926 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420971 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.420991 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421034 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421052 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421070 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421107 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421123 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421139 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421160 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421182 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421216 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421234 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421251 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421269 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421290 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421308 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421333 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421348 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421366 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421382 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421507 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421531 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421550 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421566 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421581 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421596 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421613 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421628 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421644 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421663 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421684 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421706 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421721 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421736 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421751 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421767 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421783 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421801 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421823 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421843 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421860 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421878 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421894 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421910 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421925 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421972 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:34:58 crc 
kubenswrapper[4746]: I0129 16:34:58.421987 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422003 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422022 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422039 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422056 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422073 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422089 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422106 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422122 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422138 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422154 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422170 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422222 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422238 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422256 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421139 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422272 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422293 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421071 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421428 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421637 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421762 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.421861 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422014 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422038 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422096 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422256 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422608 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422624 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422652 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422786 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422928 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.423246 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.423763 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.424018 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.424037 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.424370 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.424443 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.424749 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.425505 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.422310 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.425898 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.425969 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426011 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426053 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426105 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426148 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426233 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426265 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426298 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426329 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426363 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426390 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426417 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426445 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426470 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426495 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426520 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426543 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426571 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426595 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426618 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426648 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426671 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426695 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426725 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426759 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426783 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426809 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426831 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426858 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426883 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426907 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426929 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426951 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426974 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.426996 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427009 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427018 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427082 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427134 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427150 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427168 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427203 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427221 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427245 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427267 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427286 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427375 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427396 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427395 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427418 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427443 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427469 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427494 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427518 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427543 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427566 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427604 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427627 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427650 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427666 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427687 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427703 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427723 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427741 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427764 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427783 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427804 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427824 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427845 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427865 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427886 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427905 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427935 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427958 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427978 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.427997 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428017 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428045 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428064 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428081 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428101 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428119 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428137 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428157 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428175 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428208 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428226 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428244 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428262 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428279 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428296 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428313 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428331 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428348 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 16:34:58 crc kubenswrapper[4746]: 
I0129 16:34:58.428364 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428380 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428399 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428416 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428434 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428452 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428471 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428489 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428506 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428524 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428541 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428559 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428576 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428594 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428612 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428631 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428649 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428701 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4-hosts-file\") pod \"node-resolver-wlbj9\" (UID: \"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\") " pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428731 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428750 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428773 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428793 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428815 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428842 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428867 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428894 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428916 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.428932 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429002 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429025 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429057 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429089 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429123 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gddwk\" (UniqueName: \"kubernetes.io/projected/1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4-kube-api-access-gddwk\") pod \"node-resolver-wlbj9\" (UID: \"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\") " pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429150 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429178 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 
16:34:58.429234 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429310 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429329 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429345 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429364 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429380 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429395 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429410 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429424 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429438 4746 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429452 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429465 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429483 4746 reconciler_common.go:293] "Volume detached 
for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429504 4746 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429526 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429547 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429568 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429591 4746 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429605 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429619 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429636 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429649 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429738 4746 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429756 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429749 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429748 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429772 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429831 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429849 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429878 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429897 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429913 4746 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.429935 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.430057 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.430079 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.430283 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.430402 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.430575 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.430779 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.430976 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.430988 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.431079 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:34:58.931043383 +0000 UTC m=+21.331628217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.431271 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:34:58.931169807 +0000 UTC m=+21.331754651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.431360 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.431793 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.432123 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.432603 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.432831 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.432834 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.432880 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.432897 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433145 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433286 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433338 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433357 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433565 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433699 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433716 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.433870 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.438037 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.441548 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.441837 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.442125 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.442437 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.442778 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.443339 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.445326 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.445880 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.448580 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.450436 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.450549 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.450727 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.450904 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.450920 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.451248 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.451455 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.451799 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.451847 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.451988 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.452317 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.452378 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.452414 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.456413 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.456756 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.456767 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.456864 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.457105 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.457400 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.457647 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.457871 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.458947 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.459483 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.459555 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.459720 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.460070 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.460083 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.460503 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.460533 4746 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.460859 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462158 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462203 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462260 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462572 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462611 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462627 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462894 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.462899 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.463599 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.464061 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.464549 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.464614 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.464688 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.464891 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.465214 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.465503 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.465559 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.465607 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:34:58.965576623 +0000 UTC m=+21.366161287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.465634 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.465732 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.465977 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.466348 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.466579 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.466867 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.466933 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.467655 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.467782 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.468938 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.469087 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.469372 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.469480 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.469880 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.470337 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.470516 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.471491 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.471904 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.475131 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.480483 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.481210 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.482170 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.482221 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.482797 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.482995 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.483089 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.483361 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.483488 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.483517 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.483775 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.485385 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.485692 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.485758 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.486133 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.484250 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.483961 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.486529 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.486617 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.486693 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.486750 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.486769 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.486751 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.486801 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.486885 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:34:58.986853072 +0000 UTC m=+21.387437716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.489483 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.489692 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.489737 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.489894 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.489916 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.485296 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.490233 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.490449 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.491052 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.491131 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.491245 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.491309 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.492079 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.493591 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.493610 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.493625 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.493707 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:34:58.993684579 +0000 UTC m=+21.394269233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.496578 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.498864 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.500147 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.500235 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.500778 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.500900 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.501543 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.501603 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.501862 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.503746 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.507227 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.507610 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.507923 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc 
kubenswrapper[4746]: I0129 16:34:58.508269 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.508428 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.508803 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.509110 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.509217 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.509426 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.509563 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.509742 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.510380 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.510668 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.511485 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.511924 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.511978 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.512302 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.512607 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.515020 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.515080 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.515577 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.518181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.518594 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.518841 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.519130 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.519845 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.521323 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.521392 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.522201 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.524578 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.533682 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534074 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gddwk\" (UniqueName: \"kubernetes.io/projected/1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4-kube-api-access-gddwk\") pod \"node-resolver-wlbj9\" (UID: \"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\") " pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534134 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4-hosts-file\") pod \"node-resolver-wlbj9\" (UID: \"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\") " pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534158 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 
16:34:58.534230 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534308 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534328 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534345 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534360 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534372 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534384 4746 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534397 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534410 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534424 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534437 4746 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534449 4746 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534461 4746 reconciler_common.go:293] "Volume detached for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534472 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534484 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534496 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534508 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534519 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534529 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534541 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534551 4746 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534562 4746 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534574 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534585 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534595 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534605 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534615 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534625 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534634 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534644 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534653 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534663 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534673 4746 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534682 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534693 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534702 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534714 4746 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534725 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534736 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534746 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534755 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534765 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534775 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534786 4746 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534794 4746 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534804 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534812 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534821 4746 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534831 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534840 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534849 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534858 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" 
(UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534867 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534877 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534888 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534899 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534908 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534919 4746 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534928 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534939 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534947 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534958 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534967 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534976 4746 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534986 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.534995 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535005 4746 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535015 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535024 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535034 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535043 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535052 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535062 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535071 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535082 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535093 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535103 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535115 4746 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535125 4746 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535134 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535143 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535152 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535171 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535180 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535204 4746 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535214 4746 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535224 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535233 4746 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535245 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535254 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535264 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535275 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535285 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535296 4746 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535306 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535316 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535326 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535326 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535338 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535388 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535408 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535424 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535440 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535453 4746 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535467 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535482 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535497 4746 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535513 4746 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535528 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535541 4746 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535555 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535569 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535582 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535595 4746 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535611 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535624 4746 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535638 4746 reconciler_common.go:293] "Volume 
detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535651 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535705 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535719 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535732 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535746 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535761 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535775 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535789 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.535388 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.536262 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537069 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537365 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4-hosts-file\") pod \"node-resolver-wlbj9\" (UID: \"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\") " pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537548 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537696 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537712 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537730 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537741 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537753 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537763 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537773 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537782 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537791 4746 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537802 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537811 4746 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537820 4746 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537832 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537843 4746 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537854 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537864 4746 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537873 4746 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537883 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537893 4746 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537904 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537914 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537926 4746 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537941 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537954 4746 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537967 4746 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537979 4746 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.537992 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538004 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538016 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538030 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538043 4746 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538054 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538066 4746 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538076 4746 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538087 4746 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538097 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538107 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.538117 4746 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.541720 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.542462 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.543091 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.543510 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.549587 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.550159 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.551214 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.553417 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.554290 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.559862 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gddwk\" (UniqueName: \"kubernetes.io/projected/1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4-kube-api-access-gddwk\") pod \"node-resolver-wlbj9\" (UID: \"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\") " pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.561790 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.562836 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.565212 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.565723 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.566227 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.568999 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.569878 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.570471 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.571579 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.572092 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.572452 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.573342 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.573841 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.575028 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.575781 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.576764 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.577428 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.577967 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.578806 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.579140 4746 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.579319 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.579650 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.581110 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.581611 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c" exitCode=255 Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.582138 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.582654 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.583087 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.584790 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.585953 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.586622 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.587865 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.588664 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.589227 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.590308 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.591397 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.592100 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.593062 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 
16:34:58.593749 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.593782 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.594861 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.595796 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.597008 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.597564 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.598076 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.599113 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.599766 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.600744 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.601410 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8vzgw"] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.601751 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c"} Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.601788 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7j88d"] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.601994 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.602001 4746 scope.go:117] "RemoveContainer" containerID="2337c64c0635fe83daea7ccc8f64256bbabf443f80c6e23cb49fe314da1a8676" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.602542 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-74h7n"] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.602694 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.602804 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.605712 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.605902 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606026 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606145 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606244 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606377 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606430 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606528 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606592 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.606603 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.608625 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.608659 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.608815 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.623463 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.638822 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.638855 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.638867 4746 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.638878 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.640087 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.649791 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.660109 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 16:29:57 +0000 UTC, rotation deadline is 2026-12-21 16:17:24.810913289 +0000 UTC Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.660300 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7823h42m26.150617743s for next certificate rotation Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.660597 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.671267 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.676798 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.677159 4746 scope.go:117] "RemoveContainer" containerID="8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.677458 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.680805 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.692141 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.697707 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.710429 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.713394 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d9228427f67e0638b9e5476d563ab79ca5f85fe3da76f9c894884b53bf18c760 WatchSource:0}: Error finding container d9228427f67e0638b9e5476d563ab79ca5f85fe3da76f9c894884b53bf18c760: Status 404 returned error can't find the container with id d9228427f67e0638b9e5476d563ab79ca5f85fe3da76f9c894884b53bf18c760 Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.713671 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.727339 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.739610 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-etc-kubernetes\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.739666 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d5pt\" (UniqueName: \"kubernetes.io/projected/017d8376-e00b-442b-ac6b-b2189ff75132-kube-api-access-4d5pt\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.739696 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-system-cni-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.739784 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-kubelet\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740042 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w489f\" (UniqueName: \"kubernetes.io/projected/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-kube-api-access-w489f\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740153 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-hostroot\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " 
pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740245 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-cnibin\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740444 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t4vb\" (UniqueName: \"kubernetes.io/projected/c20d2bd9-a984-476f-855f-6a0365ccdab7-kube-api-access-5t4vb\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740522 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-system-cni-dir\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740602 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-os-release\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740699 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-socket-dir-parent\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740761 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-k8s-cni-cncf-io\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.740981 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-cni-multus\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741054 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-netns\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741122 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-conf-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741203 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c20d2bd9-a984-476f-855f-6a0365ccdab7-rootfs\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741285 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c20d2bd9-a984-476f-855f-6a0365ccdab7-proxy-tls\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741450 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-multus-certs\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741529 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cni-binary-copy\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741601 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/017d8376-e00b-442b-ac6b-b2189ff75132-multus-daemon-config\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741666 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c20d2bd9-a984-476f-855f-6a0365ccdab7-mcd-auth-proxy-config\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741748 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/017d8376-e00b-442b-ac6b-b2189ff75132-cni-binary-copy\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: 
I0129 16:34:58.741818 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-cni-bin\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741896 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-cni-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.741964 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-os-release\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.742034 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cnibin\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.742104 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.749737 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.753211 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wlbj9" Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.763478 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4e72faaa64d7d2f6927f1efe05e87b16e6581998f799a139bd6b177c95c6a57d WatchSource:0}: Error finding container 4e72faaa64d7d2f6927f1efe05e87b16e6581998f799a139bd6b177c95c6a57d: Status 404 returned error can't find the container with id 4e72faaa64d7d2f6927f1efe05e87b16e6581998f799a139bd6b177c95c6a57d Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.784236 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a9167a7_c54e_41a0_8c25_71ebb3d7bdc4.slice/crio-07d2275c92859cbeec750d510b4ebc5f96aa4d066198dcc679fd27a398296d54 WatchSource:0}: Error finding container 07d2275c92859cbeec750d510b4ebc5f96aa4d066198dcc679fd27a398296d54: Status 404 returned error can't find the container with id 07d2275c92859cbeec750d510b4ebc5f96aa4d066198dcc679fd27a398296d54 Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.793580 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.803928 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc 
kubenswrapper[4746]: I0129 16:34:58.818652 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.832502 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/h
ost/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843054 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-system-cni-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843108 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-kubelet\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843138 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w489f\" (UniqueName: \"kubernetes.io/projected/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-kube-api-access-w489f\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843262 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-kubelet\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843375 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-hostroot\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843174 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-hostroot\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843484 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-cnibin\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843521 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t4vb\" (UniqueName: 
\"kubernetes.io/projected/c20d2bd9-a984-476f-855f-6a0365ccdab7-kube-api-access-5t4vb\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843542 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-system-cni-dir\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843565 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-os-release\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843586 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-cnibin\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843627 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-system-cni-dir\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843640 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-socket-dir-parent\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843680 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-os-release\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843683 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-k8s-cni-cncf-io\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843711 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-cni-multus\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843718 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-socket-dir-parent\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843742 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-netns\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843779 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-netns\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843809 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-conf-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843838 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c20d2bd9-a984-476f-855f-6a0365ccdab7-rootfs\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843850 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-conf-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843864 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c20d2bd9-a984-476f-855f-6a0365ccdab7-proxy-tls\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843846 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-cni-multus\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843405 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-system-cni-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843887 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c20d2bd9-a984-476f-855f-6a0365ccdab7-rootfs\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843834 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-k8s-cni-cncf-io\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.843978 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844090 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-multus-certs\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844124 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cni-binary-copy\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844150 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/017d8376-e00b-442b-ac6b-b2189ff75132-multus-daemon-config\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844154 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-run-multus-certs\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844179 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c20d2bd9-a984-476f-855f-6a0365ccdab7-mcd-auth-proxy-config\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844258 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/017d8376-e00b-442b-ac6b-b2189ff75132-cni-binary-copy\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844283 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-cni-bin\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844313 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-cni-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844337 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-os-release\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844356 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cnibin\") pod \"multus-additional-cni-plugins-7j88d\" (UID: 
\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844374 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844390 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-etc-kubernetes\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844408 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d5pt\" (UniqueName: \"kubernetes.io/projected/017d8376-e00b-442b-ac6b-b2189ff75132-kube-api-access-4d5pt\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844480 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cnibin\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844561 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-os-release\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844586 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-etc-kubernetes\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844721 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-multus-cni-dir\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844387 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/017d8376-e00b-442b-ac6b-b2189ff75132-host-var-lib-cni-bin\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844955 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.844988 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-cni-binary-copy\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.845250 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/017d8376-e00b-442b-ac6b-b2189ff75132-multus-daemon-config\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.845275 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.847313 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/017d8376-e00b-442b-ac6b-b2189ff75132-cni-binary-copy\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.847512 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c20d2bd9-a984-476f-855f-6a0365ccdab7-mcd-auth-proxy-config\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.848725 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c20d2bd9-a984-476f-855f-6a0365ccdab7-proxy-tls\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.860090 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w489f\" (UniqueName: \"kubernetes.io/projected/ff347c3f-89aa-44c3-8cd2-29eea69d6bee-kube-api-access-w489f\") pod \"multus-additional-cni-plugins-7j88d\" (UID: \"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\") " pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.860735 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t4vb\" (UniqueName: \"kubernetes.io/projected/c20d2bd9-a984-476f-855f-6a0365ccdab7-kube-api-access-5t4vb\") pod \"machine-config-daemon-8vzgw\" (UID: \"c20d2bd9-a984-476f-855f-6a0365ccdab7\") " pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.862795 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d5pt\" (UniqueName: \"kubernetes.io/projected/017d8376-e00b-442b-ac6b-b2189ff75132-kube-api-access-4d5pt\") pod \"multus-74h7n\" (UID: \"017d8376-e00b-442b-ac6b-b2189ff75132\") " pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.911456 4746 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdwxv"] Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.912641 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.914916 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.914984 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.915074 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.915092 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.917215 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.917223 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.917234 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.919037 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.925922 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7j88d" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.927737 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.935685 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-74h7n" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.939388 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.945803 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.946000 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:34:59.945965416 +0000 UTC m=+22.346550190 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.946101 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.946270 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: E0129 16:34:58.946352 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:34:59.946331917 +0000 UTC m=+22.346916561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.949591 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.954618 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20d2bd9_a984_476f_855f_6a0365ccdab7.slice/crio-b7e00cd2e91b920a3930d45ccbc891674d18f474d3fc702b1b0a1d11dae4a59b WatchSource:0}: Error finding container b7e00cd2e91b920a3930d45ccbc891674d18f474d3fc702b1b0a1d11dae4a59b: Status 404 returned error can't find the container with id b7e00cd2e91b920a3930d45ccbc891674d18f474d3fc702b1b0a1d11dae4a59b Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.963872 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.968206 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod017d8376_e00b_442b_ac6b_b2189ff75132.slice/crio-0cd989b6a675b822ee48a24de536a7f7695400375efbf98159f4373b184a6113 WatchSource:0}: Error finding container 0cd989b6a675b822ee48a24de536a7f7695400375efbf98159f4373b184a6113: Status 404 returned error can't find the container with id 0cd989b6a675b822ee48a24de536a7f7695400375efbf98159f4373b184a6113 Jan 29 
16:34:58 crc kubenswrapper[4746]: W0129 16:34:58.969500 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff347c3f_89aa_44c3_8cd2_29eea69d6bee.slice/crio-b211cfac6b3d438d8309cdf6e965e9cd0da25c59aab3175ffba73081dd64e80d WatchSource:0}: Error finding container b211cfac6b3d438d8309cdf6e965e9cd0da25c59aab3175ffba73081dd64e80d: Status 404 returned error can't find the container with id b211cfac6b3d438d8309cdf6e965e9cd0da25c59aab3175ffba73081dd64e80d Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.974549 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:58 crc kubenswrapper[4746]: I0129 16:34:58.998895 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.015009 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2337c64c0635fe83daea7ccc8f64256bbabf443f80c6e23cb49fe314da1a8676\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:52Z\\\",\\\"message\\\":\\\"W0129 16:34:41.626493 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:34:41.626919 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769704481 cert, and key in /tmp/serving-cert-1038773938/serving-signer.crt, /tmp/serving-cert-1038773938/serving-signer.key\\\\nI0129 16:34:42.049915 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:34:42.052376 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:34:42.052580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:42.054429 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1038773938/tls.crt::/tmp/serving-cert-1038773938/tls.key\\\\\\\"\\\\nF0129 16:34:52.327741 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.028735 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047240 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047289 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047310 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-netd\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047327 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6sv\" (UniqueName: \"kubernetes.io/projected/50599064-6fa5-43ed-9c1d-a58b3180d421-kube-api-access-ht6sv\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047346 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-log-socket\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-node-log\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047396 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-netns\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047416 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50599064-6fa5-43ed-9c1d-a58b3180d421-ovn-node-metrics-cert\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047436 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047456 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.047460 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.047491 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.047505 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047475 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-ovn\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047550 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-config\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.047569 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:00.047546673 +0000 UTC m=+22.448131517 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047588 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047607 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-env-overrides\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047622 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-systemd\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047636 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-etc-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047651 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-script-lib\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047670 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-slash\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047691 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-bin\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047709 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047735 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-kubelet\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047752 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-systemd-units\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.047769 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-var-lib-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.047896 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.047906 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.047914 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:59 crc 
kubenswrapper[4746]: E0129 16:34:59.047955 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:00.047935753 +0000 UTC m=+22.448520397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.048019 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.048139 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:00.048111368 +0000 UTC m=+22.448696192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.050594 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.063972 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.074988 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.088971 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148493 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148553 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-env-overrides\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148571 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-etc-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148613 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-script-lib\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148632 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-slash\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148648 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-systemd\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148662 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-bin\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148697 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148693 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148771 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-kubelet\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148717 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-kubelet\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148818 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-systemd\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148857 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-systemd-units\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148819 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-bin\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148745 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-etc-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148893 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148837 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-systemd-units\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-slash\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148954 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-var-lib-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.148980 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149028 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-netd\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149043 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-var-lib-openvswitch\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149045 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht6sv\" (UniqueName: \"kubernetes.io/projected/50599064-6fa5-43ed-9c1d-a58b3180d421-kube-api-access-ht6sv\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149262 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-env-overrides\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149265 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149081 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-log-socket\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149323 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-netd\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149379 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-log-socket\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149530 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-script-lib\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149631 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-node-log\") pod \"ovnkube-node-bdwxv\" (UID: 
\"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149694 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-netns\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149712 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50599064-6fa5-43ed-9c1d-a58b3180d421-ovn-node-metrics-cert\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149775 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-ovn\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149794 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-config\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149812 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-node-log\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149873 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-ovn\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.149897 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-netns\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.150648 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-config\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.155733 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50599064-6fa5-43ed-9c1d-a58b3180d421-ovn-node-metrics-cert\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.165961 
4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht6sv\" (UniqueName: \"kubernetes.io/projected/50599064-6fa5-43ed-9c1d-a58b3180d421-kube-api-access-ht6sv\") pod \"ovnkube-node-bdwxv\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.236919 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:34:59 crc kubenswrapper[4746]: W0129 16:34:59.252759 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50599064_6fa5_43ed_9c1d_a58b3180d421.slice/crio-d6f2202770ecc2bfb85b955b220c7b4a0e3109b0524801396ec52bb96a1c1141 WatchSource:0}: Error finding container d6f2202770ecc2bfb85b955b220c7b4a0e3109b0524801396ec52bb96a1c1141: Status 404 returned error can't find the container with id d6f2202770ecc2bfb85b955b220c7b4a0e3109b0524801396ec52bb96a1c1141 Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.401317 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:17:03.12054335 +0000 UTC Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.445415 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.445601 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.586131 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.586212 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.586229 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"b7e00cd2e91b920a3930d45ccbc891674d18f474d3fc702b1b0a1d11dae4a59b"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.587938 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d" exitCode=0 Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.588026 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.588059 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"d6f2202770ecc2bfb85b955b220c7b4a0e3109b0524801396ec52bb96a1c1141"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.589883 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.589918 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d9228427f67e0638b9e5476d563ab79ca5f85fe3da76f9c894884b53bf18c760"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.591808 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerStarted","Data":"121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.591862 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerStarted","Data":"0cd989b6a675b822ee48a24de536a7f7695400375efbf98159f4373b184a6113"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.594105 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wlbj9" 
event={"ID":"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4","Type":"ContainerStarted","Data":"cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.594154 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wlbj9" event={"ID":"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4","Type":"ContainerStarted","Data":"07d2275c92859cbeec750d510b4ebc5f96aa4d066198dcc679fd27a398296d54"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.596427 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4e72faaa64d7d2f6927f1efe05e87b16e6581998f799a139bd6b177c95c6a57d"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.605460 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.607015 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.607333 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.607353 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"db480e18b137dd98d3ac80321498052ea2d6f89c0382ade1322d0186c122062d"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.610825 4746 generic.go:334] "Generic (PLEG): container finished" podID="ff347c3f-89aa-44c3-8cd2-29eea69d6bee" containerID="ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e" exitCode=0 Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.610897 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerDied","Data":"ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.610924 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerStarted","Data":"b211cfac6b3d438d8309cdf6e965e9cd0da25c59aab3175ffba73081dd64e80d"} Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.618339 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.622249 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.624949 4746 scope.go:117] "RemoveContainer" containerID="8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c" Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.625146 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.634718 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.650036 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.663663 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.674551 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.688637 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.700545 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.728075 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.744874 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2337c64c0635fe83daea7ccc8f64256bbabf443f80c6e23cb49fe314da1a8676\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:52Z\\\",\\\"message\\\":\\\"W0129 16:34:41.626493 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:34:41.626919 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769704481 cert, and key in /tmp/serving-cert-1038773938/serving-signer.crt, /tmp/serving-cert-1038773938/serving-signer.key\\\\nI0129 16:34:42.049915 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:34:42.052376 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:34:42.052580 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:42.054429 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1038773938/tls.crt::/tmp/serving-cert-1038773938/tls.key\\\\\\\"\\\\nF0129 16:34:52.327741 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 
16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.759732 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.773144 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.787364 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.805227 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.820503 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.837328 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.851252 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.865885 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.882485 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.893975 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.908568 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.924152 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc 
kubenswrapper[4746]: I0129 16:34:59.940093 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.958912 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:34:59Z is after 2025-08-24T17:21:41Z" Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.959304 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:34:59 crc kubenswrapper[4746]: I0129 16:34:59.959397 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.959499 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:35:01.959463088 +0000 UTC m=+24.360047732 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.959618 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:34:59 crc kubenswrapper[4746]: E0129 16:34:59.960030 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:01.960003442 +0000 UTC m=+24.360588086 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.060594 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.060686 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.060719 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.060829 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.060876 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.060911 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:02.06088957 +0000 UTC m=+24.461474234 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.060921 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.060942 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.060876 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.061028 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:02.060999833 +0000 UTC m=+24.461584667 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.061038 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.061051 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.061083 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:02.061073675 +0000 UTC m=+24.461658319 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.377123 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6rl2h"] Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.378165 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.379958 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.379992 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.381099 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.385982 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.401555 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.401846 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:17:36.042108482 +0000 UTC Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.427846 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.443436 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.445676 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.445720 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.445810 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:00 crc kubenswrapper[4746]: E0129 16:35:00.446164 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.450259 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.451242 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.451960 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.457325 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-ku
belet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.464575 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-host\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.464642 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-serviceca\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.464671 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvdr\" (UniqueName: \"kubernetes.io/projected/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-kube-api-access-srvdr\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.471627 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.484564 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.497604 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.509382 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.524360 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.546626 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.558696 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.565600 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-serviceca\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.565718 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srvdr\" (UniqueName: \"kubernetes.io/projected/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-kube-api-access-srvdr\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.565904 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-host\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.565836 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-host\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.567368 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-serviceca\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.575541 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.591005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srvdr\" (UniqueName: \"kubernetes.io/projected/ae29a6fb-63c0-4daf-8710-c11c2532e5f8-kube-api-access-srvdr\") pod \"node-ca-6rl2h\" (UID: \"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\") " pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.596747 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.629546 4746 generic.go:334] "Generic (PLEG): container finished" podID="ff347c3f-89aa-44c3-8cd2-29eea69d6bee" containerID="78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7" exitCode=0 Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.629609 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerDied","Data":"78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7"} Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.637440 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.637479 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.637491 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.637501 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.637510 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.637519 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.643714 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.661248 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z 
is after 2025-08-24T17:21:41Z"
Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.670978 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.685454 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.702883 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.721668 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.735625 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.752482 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.775165 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.792795 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.804953 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.822611 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.859720 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-co
py\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:00Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:00 crc kubenswrapper[4746]: I0129 16:35:00.868435 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6rl2h" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.402266 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:56:35.666049323 +0000 UTC Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.444878 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:01 crc kubenswrapper[4746]: E0129 16:35:01.445056 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.643468 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6rl2h" event={"ID":"ae29a6fb-63c0-4daf-8710-c11c2532e5f8","Type":"ContainerStarted","Data":"c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f"} Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.643558 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6rl2h" event={"ID":"ae29a6fb-63c0-4daf-8710-c11c2532e5f8","Type":"ContainerStarted","Data":"ad9590ec94bb8111ebfa539f31a184b8112c9ba35c7fd6efb9d2dca3722992f0"} Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.646054 4746 generic.go:334] "Generic (PLEG): container finished" podID="ff347c3f-89aa-44c3-8cd2-29eea69d6bee" containerID="ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20" exitCode=0 Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.646219 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerDied","Data":"ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20"} Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.665009 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.691312 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.709355 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.724290 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.736644 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.751272 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.765337 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-co
py\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.777022 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.797567 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.809417 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.824355 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.838053 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.853386 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.864811 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.877303 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.894288 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.907401 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.920925 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.936864 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.956562 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.972116 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.981576 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:35:01 crc kubenswrapper[4746]: E0129 16:35:01.981800 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:35:05.981755633 +0000 UTC m=+28.382340307 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.981892 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:01 crc kubenswrapper[4746]: E0129 16:35:01.982032 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:35:01 crc kubenswrapper[4746]: E0129 16:35:01.982105 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:05.982082352 +0000 UTC m=+28.382667176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.987003 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:01 crc kubenswrapper[4746]: I0129 16:35:01.999310 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.012099 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.024711 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.037823 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.083604 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.083662 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.083718 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.083784 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.083815 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.083835 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.083903 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.083920 4746 configmap.go:193] 
Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.083929 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.084054 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.083911 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:06.083889515 +0000 UTC m=+28.484474159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.084138 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:06.084111471 +0000 UTC m=+28.484696125 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.084156 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:06.084146532 +0000 UTC m=+28.484731196 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.403522 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:27:04.157818958 +0000 UTC Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.445370 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.445370 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.445606 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:02 crc kubenswrapper[4746]: E0129 16:35:02.445687 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.656113 4746 generic.go:334] "Generic (PLEG): container finished" podID="ff347c3f-89aa-44c3-8cd2-29eea69d6bee" containerID="695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb" exitCode=0 Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.656278 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerDied","Data":"695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb"} Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.666874 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.668781 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140"} Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.678621 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.701617 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.716077 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.731773 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.756888 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.775445 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.791804 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.811691 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.829875 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.844281 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.858632 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.873443 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.890842 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/ho
st/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.907206 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.923620 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.938502 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.958310 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.977985 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:02 crc kubenswrapper[4746]: I0129 16:35:02.996923 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:02Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.009331 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.024571 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.038382 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.053797 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.072716 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.105452 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.121651 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.198113 4746 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.215871 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 
16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.216726 4746 scope.go:117] "RemoveContainer" containerID="8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c" Jan 29 16:35:03 crc kubenswrapper[4746]: E0129 16:35:03.217011 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.404335 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:22:10.472788733 +0000 UTC Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.444934 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:03 crc kubenswrapper[4746]: E0129 16:35:03.445093 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.696636 4746 generic.go:334] "Generic (PLEG): container finished" podID="ff347c3f-89aa-44c3-8cd2-29eea69d6bee" containerID="f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8" exitCode=0 Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.696770 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerDied","Data":"f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8"} Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.734065 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.757146 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.782064 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.807966 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.821651 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.836285 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.847164 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.863569 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.878281 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.889259 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.903082 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.923891 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:03 crc kubenswrapper[4746]: I0129 16:35:03.939350 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:03Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.012678 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.014903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc 
kubenswrapper[4746]: I0129 16:35:04.014945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.014956 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.015063 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.023390 4746 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.023653 4746 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.024844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.024889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.024899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.024920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.024932 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.038562 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 
2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.043494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.043537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.043549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.043571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.043584 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.058923 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 
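Note: the recurring "x509: certificate has expired or is not yet valid" failure above is Go's standard certificate validity-window check: the webhook's serving certificate has a NotAfter of 2025-08-24T17:21:41Z, while the clock reads 2026-01-29. A minimal standalone sketch of that check using only the Go standard library (the certificate path below is hypothetical, not taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving certificate.
	data, err := os.ReadFile("/etc/webhook/tls.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The same validity-window comparison that produced the error above:
	// "current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z".
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("certificate is valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}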
2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.064666 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.064724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.064734 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.064824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.064839 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.082883 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 
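Note: each "Node became not ready" entry carries the node's Ready condition as inline JSON. A small sketch decoding that condition with the Go standard library; the struct below is hand-written to mirror the fields visible in the setters.go entries, not imported from k8s.io/api:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// NodeCondition mirrors the fields shown in the log entries above.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition payload copied from the "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s reason=%s\n%s\n", c.Type, c.Status, c.Reason, c.Message)
}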
2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.091383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.092476 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.092532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.092576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.092598 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.109540 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 
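Note: the Post "https://127.0.0.1:9743/node?timeout=10s" in each failure is the HTTPS admission call whose TLS handshake is being rejected. A sketch of that client call under Go's default certificate verification; the URL is taken from the log, while the request body is a stand-in (the real call carries an AdmissionReview):

package main

import (
	"bytes"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	body := []byte(`{}`) // stand-in; the real request body is an AdmissionReview
	// With an expired serving certificate, default TLS verification fails the
	// handshake and the returned error reads like the entries above:
	//   tls: failed to verify certificate: x509: certificate has expired or is not yet valid
	resp, err := client.Post("https://127.0.0.1:9743/node?timeout=10s", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("failed to call webhook: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("webhook answered: %s", resp.Status)
}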
2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.114491 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.114536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.114549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.114571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.114586 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.125967 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 
2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.126088 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.128511 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.128560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.128575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.128595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.128609 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.232008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.232091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.232118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.232156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.232180 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.335156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.335243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.335257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.335278 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.335330 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.405359 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:46:30.244154337 +0000 UTC Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.438180 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.438278 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.438300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.438331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.438354 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.445528 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.445550 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.445653 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:04 crc kubenswrapper[4746]: E0129 16:35:04.445793 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.541339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.541396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.541409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.541436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.541449 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.644028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.644060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.644071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.644087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.644097 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.680278 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.686269 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.702045 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.702894 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.709473 4746 generic.go:334] "Generic (PLEG): container finished" podID="ff347c3f-89aa-44c3-8cd2-29eea69d6bee" containerID="042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb" exitCode=0 Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.709517 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerDied","Data":"042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.720073 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.737134 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.746284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.746360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.746374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.746399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.746412 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.755161 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.771002 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.785916 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.807027 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.830987 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.849656 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.849717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.849736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.849767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.849790 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.871418 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3
afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.884897 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.902584 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:3
4:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.918871 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.935343 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.953823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.953897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.953918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.953954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.953976 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:04Z","lastTransitionTime":"2026-01-29T16:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.954758 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.966986 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:04 crc kubenswrapper[4746]: I0129 16:35:04.985483 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.003118 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.019822 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.034917 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.056279 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContain
erStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.057568 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.057620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.057629 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.057649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.057660 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.067427 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.083620 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.100254 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.118414 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.133632 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.158988 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.160027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.160132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.160210 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.160282 
4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.160344 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.174020 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.263331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.263369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.263378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.263399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.263414 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.366155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.366221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.366251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.366273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.366286 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.406512 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:03:11.508821069 +0000 UTC Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.445057 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:05 crc kubenswrapper[4746]: E0129 16:35:05.445258 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.470732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.470773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.470784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.470801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.470812 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.574033 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.574085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.574102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.574125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.574138 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.676732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.676764 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.676772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.676788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.676798 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.718866 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" event={"ID":"ff347c3f-89aa-44c3-8cd2-29eea69d6bee","Type":"ContainerStarted","Data":"5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.724648 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef"} Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.745977 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.762009 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.779847 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.779917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.779942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.779972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.779992 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.781487 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.802799 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.821407 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.842453 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.857289 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.872295 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.884292 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.884347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.884368 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.884403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.884429 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.891009 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.909236 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.923926 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.947574 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.966315 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.988179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.988309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.988342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.988377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.988400 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:05Z","lastTransitionTime":"2026-01-29T16:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:05 crc kubenswrapper[4746]: I0129 16:35:05.995446 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:05Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.018229 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.031445 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.031718 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:35:14.031681279 +0000 UTC m=+36.432265933 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.031772 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.031981 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.032126 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:14.03208594 +0000 UTC m=+36.432670594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.037087 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.057430 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.081446 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.091335 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.091379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.091392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.091413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.091426 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.102784 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.117852 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.132925 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.132982 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.133020 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133175 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133257 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133340 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133350 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:14.133317367 +0000 UTC m=+36.533902051 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133234 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133390 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133410 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133362 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133488 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:14.13345075 +0000 UTC m=+36.534035394 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.133558 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:14.133525462 +0000 UTC m=+36.534110116 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.135677 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.154865 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.176241 4746 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.178455 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.195001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.195063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.195079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.195107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.195125 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.208972 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.222500 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.242278 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.260490 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.278575 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.298935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.299003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.299020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.299050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.299070 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.402838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.402906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.402923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.402951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.402968 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.407107 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:16:44.602614152 +0000 UTC Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.445060 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.445167 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.445295 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:06 crc kubenswrapper[4746]: E0129 16:35:06.445366 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.505986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.506031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.506040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.506074 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.506088 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.608824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.608878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.608889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.608910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.608939 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.712581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.712671 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.712698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.712734 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.712760 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.728329 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.729044 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.729127 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.765430 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.765580 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.789263 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/r
un/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.812675 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.815678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.815739 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.815759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.815786 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.815805 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.833145 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.848283 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.866308 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.886987 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.915902 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae
77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.928365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.928477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.928501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.928541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.928575 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:06Z","lastTransitionTime":"2026-01-29T16:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.941666 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.958374 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.980370 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09e
e7b788e884d392fb451e70ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:06 crc kubenswrapper[4746]: I0129 16:35:06.996034 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.015479 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.032357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.032453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.032471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.032508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.032525 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.033658 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.048139 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.064886 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.080867 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.096033 4746 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.118776 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.135175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.135256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.135269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.135293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.135326 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.136784 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.149121 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.160803 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.175834 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.191506 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.213272 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09e
e7b788e884d392fb451e70ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.225576 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.237374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.237442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.237457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.237479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.237497 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.244891 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.258941 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.277697 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:07Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.340810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.340874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.340895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.340921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.340942 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.408147 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:06:01.741287821 +0000 UTC Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.444618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.444677 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.444682 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.444695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: E0129 16:35:07.444923 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.444945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.444982 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.551163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.551269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.551289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.551387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.551408 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.618650 4746 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.655211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.655276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.655293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.655321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.655377 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.732743 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.758668 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.758721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.758733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.758756 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.758774 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.861647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.861714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.861733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.861759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.861779 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.964577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.964632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.964649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.964670 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:07 crc kubenswrapper[4746]: I0129 16:35:07.964685 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:07Z","lastTransitionTime":"2026-01-29T16:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.067503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.067559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.067570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.067590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.067604 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.169920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.169957 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.170138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.170466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.170483 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.221785 4746 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.277748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.278116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.278346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.278524 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.279073 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.382168 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.382258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.382269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.382289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.382305 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.408615 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:59:10.903980929 +0000 UTC Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.445213 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.445273 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:08 crc kubenswrapper[4746]: E0129 16:35:08.445393 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:08 crc kubenswrapper[4746]: E0129 16:35:08.445577 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.467431 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.485614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.485666 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.485677 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.485694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.485707 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.490778 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.509794 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.534163 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.554376 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.577428 4746 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.588652 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.588694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.588707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.588729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.588743 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.592179 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.605886 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.622309 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.645640 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae
77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.662317 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.680121 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.692928 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.692986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.693003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.693023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.693038 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.699776 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.712245 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.738801 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/0.log" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.741825 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef" exitCode=1 Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.741884 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.742644 4746 scope.go:117] "RemoveContainer" containerID="1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef" Jan 29 16:35:08 crc 
kubenswrapper[4746]: I0129 16:35:08.758841 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.772875 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.789025 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.797642 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.797897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.798011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.798206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.798244 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.808709 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.823539 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.836594 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.853001 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.869990 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.886156 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.901491 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.901545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.901570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.901602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.901628 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:08Z","lastTransitionTime":"2026-01-29T16:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.923504 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09e
e7b788e884d392fb451e70ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:08Z\\\",\\\"message\\\":\\\"51 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:08.463572 6051 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:35:08.463582 6051 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:08.463616 6051 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:35:08.463641 6051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:35:08.463647 6051 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463661 6051 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:35:08.463739 6051 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463861 6051 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464133 6051 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464658 6051 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92b
fbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.942080 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.962465 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 
16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:08 crc kubenswrapper[4746]: I0129 16:35:08.986956 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.004379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.004428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.004439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.004464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.004477 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.009812 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.107243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.107309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.107324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.107351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.107372 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.213453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.213539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.213566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.213599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.213628 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.315872 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.315921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.315933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.315953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.315966 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
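[Annotation] The NodeNotReady condition repeated above persists until the container runtime's network plugin finds a CNI configuration file. A minimal Go sketch of that presence check, assuming the directory named in the log message and the conventional .conf/.conflist/.json extensions; this is an illustration of the idea, not kubelet or CRI-O source:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file (.conf, .conflist, or .json by common convention).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		// Same condition kubelet keeps reporting above:
		// NetworkReady=false, reason NetworkPluginNotReady.
		fmt.Println("container runtime network not ready: no CNI configuration file")
		return
	}
	fmt.Println("NetworkReady=true")
}

Here the ovnkube-node pod is supposed to write that configuration, and it is crash-looping (see the ovnkube-controller termination above), which is why the condition never clears.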
Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.409601 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:32:53.724000448 +0000 UTC Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.419132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.419203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.419220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.419240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.419252 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.444866 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:09 crc kubenswrapper[4746]: E0129 16:35:09.445057 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.521775 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.521816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.521828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.521850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.521861 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
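[Annotation] The certificate_manager entry above logs a serving-certificate expiration of 2026-02-24 with a rotation deadline of 2025-11-28, which is already in the past at the node's clock time (2026-01-29), so rotation is overdue. A minimal sketch of how such a deadline can be derived as a jittered fraction of the certificate's validity window; the 70-90% range follows client-go's certificate manager behavior, but treat the exact fractions and the assumed issue date as illustrative assumptions:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point at roughly 70-90% of the
// certificate's validity period (assumed jitter range).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(validity) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	// For a one-year certificate issued 2025-02-24, the jittered deadline
	// lands between roughly 2025-11-05 and 2026-01-17, consistent with the
	// logged deadline of 2025-11-28.
}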
Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.624933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.624980 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.624990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.625010 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.625022 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.728214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.728271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.728288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.728321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.728338 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
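[Annotation] Every "Failed to update status for pod" entry in this log shares one root cause: the pod.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24, so the API server's TLS handshake to 127.0.0.1:9743 fails and the status patch is rejected. A minimal sketch of the NotBefore/NotAfter comparison that produces the "x509: certificate has expired or is not yet valid" error; the PEM input is a placeholder and this is an illustration, not the crypto/x509 verifier itself:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"time"
)

// checkValidity parses a PEM certificate and compares the current time
// against its validity window, mirroring the error text seen above.
func checkValidity(pemBytes []byte, now time.Time) error {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired or is not yet valid: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	var certPEM []byte // would hold the webhook serving certificate; omitted here
	if err := checkValidity(certPEM, time.Now()); err != nil {
		fmt.Println("tls: failed to verify certificate: x509:", err)
	}
}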
Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.749083 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/0.log" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.754451 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.754660 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.779960 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.795291 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.816718 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.832264 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.832308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.832318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.832337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.832348 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.837139 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.851179 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.872733 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.885764 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.906354 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.921530 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.933108 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.935275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.935333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.935349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.935370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.935388 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:09Z","lastTransitionTime":"2026-01-29T16:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.952286 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.972222 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:09 crc kubenswrapper[4746]: I0129 16:35:09.997569 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677
662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:08Z\\\",\\\"message\\\":\\\"51 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:08.463572 6051 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:35:08.463582 6051 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:08.463616 6051 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:35:08.463641 6051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:35:08.463647 6051 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463661 6051 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:35:08.463739 6051 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463861 6051 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464133 6051 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464658 6051 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:09Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.013597 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.038337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.038410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.038429 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.038458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.038478 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.142238 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.142301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.142327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.142358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.142381 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.245089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.245146 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.245162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.245224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.245261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.347751 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.347834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.347867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.347897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.347917 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.410322 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:28:59.74903126 +0000 UTC Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.445692 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.445773 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:10 crc kubenswrapper[4746]: E0129 16:35:10.445907 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:10 crc kubenswrapper[4746]: E0129 16:35:10.446067 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.450425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.450498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.450521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.450551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.450570 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.553454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.553498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.553510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.553530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.553543 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.656208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.656251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.656259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.656279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.656294 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.759481 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.759526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.759540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.759556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.759568 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.761128 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/1.log" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.762101 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/0.log" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.766162 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac" exitCode=1 Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.766229 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.766270 4746 scope.go:117] "RemoveContainer" containerID="1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.766872 4746 scope.go:117] "RemoveContainer" containerID="569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac" Jan 29 16:35:10 crc kubenswrapper[4746]: E0129 16:35:10.767039 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.782358 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.799849 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.830497 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.851607 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.862530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.862576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.862590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.862613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.862630 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.868163 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.885629 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.907089 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.938466 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:08Z\\\",\\\"message\\\":\\\"51 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:08.463572 6051 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:35:08.463582 6051 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:08.463616 6051 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:35:08.463641 6051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:35:08.463647 6051 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463661 6051 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:35:08.463739 6051 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463861 6051 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464133 6051 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464658 6051 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.938826 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2"] Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.940005 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.942407 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.943347 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.962478 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.965579 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.965647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.965667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.965707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.965727 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:10Z","lastTransitionTime":"2026-01-29T16:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.979828 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:10 crc kubenswrapper[4746]: I0129 16:35:10.996589 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:10Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.016104 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.032118 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.044809 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.060259 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.068672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.068709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:11 crc 
kubenswrapper[4746]: I0129 16:35:11.068720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.068741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.068754 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.077818 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.087344 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrx5\" (UniqueName: \"kubernetes.io/projected/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-kube-api-access-vtrx5\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.087435 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.087471 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.087578 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.088605 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.102251 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.120384 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554c
abc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.133923 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.154988 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3c
d840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba4fc1c079d1b83fca02f26cfab8ff906d1f09ee7b788e884d392fb451e70ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:08Z\\\",\\\"message\\\":\\\"51 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:08.463572 6051 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:35:08.463582 6051 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:08.463616 6051 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:35:08.463641 6051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 16:35:08.463647 6051 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463661 6051 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 16:35:08.463739 6051 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:35:08.463861 6051 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464133 6051 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 16:35:08.464658 6051 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.166819 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.170700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.170731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.170740 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.170777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.170787 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.181788 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.189153 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.189419 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.189501 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.189628 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtrx5\" (UniqueName: \"kubernetes.io/projected/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-kube-api-access-vtrx5\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.190086 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.190388 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.194529 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.201757 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.208563 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtrx5\" (UniqueName: \"kubernetes.io/projected/5d211bc9-9005-4fe1-9d35-66e3d94cfc3b-kube-api-access-vtrx5\") pod \"ovnkube-control-plane-749d76644c-wlqq2\" (UID: \"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.216847 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.233461 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.246472 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.258606 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.260779 4746 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.271847 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.273775 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.273810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.273822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.273838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.273848 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:11 crc kubenswrapper[4746]: W0129 16:35:11.280762 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d211bc9_9005_4fe1_9d35_66e3d94cfc3b.slice/crio-e9a7170bc05b8883ff5b7474b9366707c8a035533ad4347fa81881ac7a734ef9 WatchSource:0}: Error finding container e9a7170bc05b8883ff5b7474b9366707c8a035533ad4347fa81881ac7a734ef9: Status 404 returned error can't find the container with id e9a7170bc05b8883ff5b7474b9366707c8a035533ad4347fa81881ac7a734ef9 Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.376632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.376929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.376942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.376963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.376977 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.411433 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:27:36.480996265 +0000 UTC
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.445058 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:35:11 crc kubenswrapper[4746]: E0129 16:35:11.445244 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.479567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.479623 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.479643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.479672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.479691 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.582816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.582878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.582899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.582954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.582972 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.686605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.686665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.686682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.686705 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.686723 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.772451 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/1.log"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.777958 4746 scope.go:117] "RemoveContainer" containerID="569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac"
Jan 29 16:35:11 crc kubenswrapper[4746]: E0129 16:35:11.778637 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.778839 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" event={"ID":"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b","Type":"ContainerStarted","Data":"b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.779215 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" event={"ID":"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b","Type":"ContainerStarted","Data":"0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.779444 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" event={"ID":"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b","Type":"ContainerStarted","Data":"e9a7170bc05b8883ff5b7474b9366707c8a035533ad4347fa81881ac7a734ef9"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.789892 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.789931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.789940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.789958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.789969 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.796654 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.813012 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.830143 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.847471 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.859675 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.873003 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.888035 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.892969 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.893043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.893061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.893161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.893177 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.906053 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.920012 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.940507 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.952754 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.965257 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.978058 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.995852 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:11Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.996693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.996758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.996772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.996798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:11 crc kubenswrapper[4746]: I0129 16:35:11.996814 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:11Z","lastTransitionTime":"2026-01-29T16:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.009425 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.026015 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.040264 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.052418 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.065645 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.081821 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554c
abc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.099413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.099859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.099909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.099934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.099948 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.100470 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.122423 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.134219 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.148235 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.168292 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.185262 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.202401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.202453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.202477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.202510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.202529 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.203808 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.220151 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.233508 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.245390 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.306304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.306357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.306370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.306391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.306405 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.409861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.409931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.409951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.409983 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.410003 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.413019 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:38:24.447035773 +0000 UTC Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.445768 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.445833 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:12 crc kubenswrapper[4746]: E0129 16:35:12.446016 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:12 crc kubenswrapper[4746]: E0129 16:35:12.446174 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.454899 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-f72wn"] Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.455923 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:12 crc kubenswrapper[4746]: E0129 16:35:12.456048 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.474149 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.495669 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 
16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.514278 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.514337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.514350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.514375 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.514388 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.514621 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.531915 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.552398 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.581126 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.602422 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.604781 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr7ps\" (UniqueName: \"kubernetes.io/projected/ed3cddee-6243-41b8-9ac3-7ef6772d2960-kube-api-access-pr7ps\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.604887 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.618137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.618375 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.618538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.618687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.618817 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.624416 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.644804 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.663639 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.684687 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.702246 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.705615 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-pr7ps\" (UniqueName: \"kubernetes.io/projected/ed3cddee-6243-41b8-9ac3-7ef6772d2960-kube-api-access-pr7ps\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.705924 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:12 crc kubenswrapper[4746]: E0129 16:35:12.706243 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:12 crc kubenswrapper[4746]: E0129 16:35:12.706374 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:13.206338247 +0000 UTC m=+35.606922941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs") pod "network-metrics-daemon-f72wn" (UID: "ed3cddee-6243-41b8-9ac3-7ef6772d2960") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.718791 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.722293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.722345 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.722362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.722389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.722407 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.730739 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr7ps\" (UniqueName: \"kubernetes.io/projected/ed3cddee-6243-41b8-9ac3-7ef6772d2960-kube-api-access-pr7ps\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.738280 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.762722 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.779059 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:12Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.825983 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.826042 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.826059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.826090 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.826113 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.929773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.929844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.929889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.929917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:12 crc kubenswrapper[4746]: I0129 16:35:12.929936 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:12Z","lastTransitionTime":"2026-01-29T16:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.032749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.032806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.032826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.032850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.032866 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.135970 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.136077 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.136095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.136123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.136143 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.216807 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:13 crc kubenswrapper[4746]: E0129 16:35:13.217402 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:13 crc kubenswrapper[4746]: E0129 16:35:13.217570 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:14.217526749 +0000 UTC m=+36.618111433 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs") pod "network-metrics-daemon-f72wn" (UID: "ed3cddee-6243-41b8-9ac3-7ef6772d2960") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.239973 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.240049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.240066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.240089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.240104 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.343274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.343336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.343352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.343378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.343395 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.414227 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:53:11.12729753 +0000 UTC Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.445364 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:13 crc kubenswrapper[4746]: E0129 16:35:13.445579 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.447836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.447890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.447914 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.447945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.447969 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.550958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.551011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.551026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.551046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.551058 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.653947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.653996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.654008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.654030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.654044 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.757434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.757492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.757529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.757563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.757582 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.861854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.861918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.861931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.861959 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.861976 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.965602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.965685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.965705 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.965738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:13 crc kubenswrapper[4746]: I0129 16:35:13.965759 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:13Z","lastTransitionTime":"2026-01-29T16:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.070019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.070081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.070099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.070126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.070146 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.127324 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.127582 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:35:30.127533642 +0000 UTC m=+52.528118316 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.127678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.127907 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.128026 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:30.127993175 +0000 UTC m=+52.528577859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.173861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.173918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.173935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.173956 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.173976 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.229528 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.229628 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.229686 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.229726 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229832 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229872 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229896 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229942 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229942 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229983 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229991 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:35:16.229963061 +0000 UTC m=+38.630547715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs") pod "network-metrics-daemon-f72wn" (UID: "ed3cddee-6243-41b8-9ac3-7ef6772d2960") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.230008 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.230037 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:30.230002772 +0000 UTC m=+52.630587446 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.229881 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.230074 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:30.230050524 +0000 UTC m=+52.630635208 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.230102 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:30.230087115 +0000 UTC m=+52.630671799 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.277962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.278038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.278059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.278092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.278117 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.313028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.313095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.313112 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.313140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.313159 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.333833 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.339470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.339575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.339606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.339638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.339665 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.359827 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.365550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.365603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.365620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.365644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.365656 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.386572 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.392024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.392102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.392127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.392162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.392260 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.412096 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.414899 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:18:15.861730488 +0000 UTC Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.417556 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.417624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.417643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.417673 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.417698 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.440055 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.440373 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.443647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.443715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.443729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.443749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.443763 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.444826 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.444880 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.444910 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.444977 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.445076 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:14 crc kubenswrapper[4746]: E0129 16:35:14.445282 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.546895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.546972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.546991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.547029 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.547048 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.650754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.650824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.650845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.650874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.650895 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.755147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.755224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.755239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.755259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.755306 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.859701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.859774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.859800 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.859831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.859854 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.963110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.963162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.963170 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.963208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:14 crc kubenswrapper[4746]: I0129 16:35:14.963223 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:14Z","lastTransitionTime":"2026-01-29T16:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.066771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.066855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.066878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.066912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.066931 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.169747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.169823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.169840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.169865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.169883 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.272591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.272633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.272645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.272664 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.272678 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.374995 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.375070 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.375092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.375124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.375148 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.415490 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:42:32.128145493 +0000 UTC Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.445145 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:15 crc kubenswrapper[4746]: E0129 16:35:15.445364 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.479203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.479246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.479255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.479273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.479283 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.581571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.581655 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.581676 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.581706 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.581727 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.685341 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.685415 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.685433 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.685463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.685482 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.788931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.789018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.789044 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.789095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.789126 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.891714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.891785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.891804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.891830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.891849 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.994852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.994904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.994915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.994937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:15 crc kubenswrapper[4746]: I0129 16:35:15.994951 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:15Z","lastTransitionTime":"2026-01-29T16:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.098143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.098280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.098310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.098340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.098360 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.201635 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.201720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.201750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.201783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.201803 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.255151 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:16 crc kubenswrapper[4746]: E0129 16:35:16.255513 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:16 crc kubenswrapper[4746]: E0129 16:35:16.255722 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:20.25567853 +0000 UTC m=+42.656263214 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs") pod "network-metrics-daemon-f72wn" (UID: "ed3cddee-6243-41b8-9ac3-7ef6772d2960") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.304926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.304998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.305020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.305052 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.305074 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.409086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.409162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.409178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.409241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.409261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.416104 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:12:56.058147108 +0000 UTC Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.444760 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.444909 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.444909 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:16 crc kubenswrapper[4746]: E0129 16:35:16.445085 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:16 crc kubenswrapper[4746]: E0129 16:35:16.445216 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:16 crc kubenswrapper[4746]: E0129 16:35:16.445509 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.513260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.513350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.513367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.513392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.513411 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.616784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.616826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.616846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.616869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.616884 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.720977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.721057 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.721071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.721093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.721109 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.824439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.824530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.824556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.824590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.824619 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.929857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.929944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.929968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.930002 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:16 crc kubenswrapper[4746]: I0129 16:35:16.930035 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:16Z","lastTransitionTime":"2026-01-29T16:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.034172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.034247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.034260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.034280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.034292 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.137558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.137628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.137647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.137676 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.137739 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.241251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.241313 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.241325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.241341 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.241391 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.344770 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.344809 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.344822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.344844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.344858 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.417016 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:40:06.481033792 +0000 UTC Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.446654 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:17 crc kubenswrapper[4746]: E0129 16:35:17.447127 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.447466 4746 scope.go:117] "RemoveContainer" containerID="8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.447567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.447621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.447638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.447663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.447681 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.551176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.551240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.551254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.551278 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.551291 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.654438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.654483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.654496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.654513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.654526 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.758142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.758668 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.758693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.758720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.758740 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.805712 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.808451 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.809033 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.829053 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.853631 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.861380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.861458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.861484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.861511 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.861530 
4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.875444 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.892825 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.905493 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.919506 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.945772 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae
77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.965425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.965470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.965483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.965503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.965513 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:17Z","lastTransitionTime":"2026-01-29T16:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.968901 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:17 crc kubenswrapper[4746]: I0129 16:35:17.990747 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.021786 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677
662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.040852 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.060085 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.068064 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.068142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.068170 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.068239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.068271 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.075149 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.093123 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.113678 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.134138 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.171471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.171559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.171581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.171614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.171632 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.275013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.275064 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.275074 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.275094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.275111 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.378733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.378817 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.378835 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.378864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.378881 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.417805 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:54:57.664640762 +0000 UTC Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.445296 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.445363 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:18 crc kubenswrapper[4746]: E0129 16:35:18.445454 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.445309 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:18 crc kubenswrapper[4746]: E0129 16:35:18.445591 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:18 crc kubenswrapper[4746]: E0129 16:35:18.445744 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.462675 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.482532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.482582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.482596 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.482616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.482631 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.484347 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0
b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.498777 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.515578 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 
16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.539703 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.554647 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.566385 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.579375 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.585918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.585961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.585973 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.585992 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.586006 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.594555 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.609033 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.622771 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.640694 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.653235 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.668275 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.683160 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.688848 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.688938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.688955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.689011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.689091 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.701344 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.793413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.793488 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.793512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.793547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.793571 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.896965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.897012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.897023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.897041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:18 crc kubenswrapper[4746]: I0129 16:35:18.897053 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:18Z","lastTransitionTime":"2026-01-29T16:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.000304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.000339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.000350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.000367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.000380 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.102722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.102796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.102819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.102850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.102872 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.205694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.205735 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.205744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.205781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.205793 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.308845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.308922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.308944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.308995 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.309030 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.412531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.412587 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.412604 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.412632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.412651 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.418772 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:59:14.075680826 +0000 UTC Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.445218 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:19 crc kubenswrapper[4746]: E0129 16:35:19.445481 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.515654 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.515718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.515728 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.515749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.515781 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.619533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.619608 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.619625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.619656 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.619676 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.722175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.722295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.722320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.722355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.722382 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.825217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.825275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.825292 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.825319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.825335 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.928863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.928942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.928975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.929008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:19 crc kubenswrapper[4746]: I0129 16:35:19.929031 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:19Z","lastTransitionTime":"2026-01-29T16:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.032061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.032126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.032149 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.032179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.032237 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.135478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.135599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.135623 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.135659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.135680 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.238882 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.238951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.238969 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.238997 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.239018 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.313018 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:20 crc kubenswrapper[4746]: E0129 16:35:20.313356 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:20 crc kubenswrapper[4746]: E0129 16:35:20.313534 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:28.313486681 +0000 UTC m=+50.714071355 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs") pod "network-metrics-daemon-f72wn" (UID: "ed3cddee-6243-41b8-9ac3-7ef6772d2960") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.342972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.343057 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.343079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.343113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.343134 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.419014 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:38:56.073040257 +0000 UTC Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.444771 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.444845 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.444799 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:20 crc kubenswrapper[4746]: E0129 16:35:20.444991 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:20 crc kubenswrapper[4746]: E0129 16:35:20.445086 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:20 crc kubenswrapper[4746]: E0129 16:35:20.445358 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.446910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.446965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.446982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.447004 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.447026 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.550509 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.550580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.550599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.550628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.550651 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.653659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.653843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.653866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.653893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.653916 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.756545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.756603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.756621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.756647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.756665 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.859572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.859616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.859632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.859657 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.859675 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.961910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.961969 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.961978 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.962015 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:20 crc kubenswrapper[4746]: I0129 16:35:20.962026 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:20Z","lastTransitionTime":"2026-01-29T16:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.065172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.065241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.065250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.065271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.065299 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.168260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.168319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.168336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.168362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.168379 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.271219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.271508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.271599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.271698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.271789 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.375078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.375514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.375727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.375866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.376052 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.419556 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:01:38.920906685 +0000 UTC Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.445457 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:21 crc kubenswrapper[4746]: E0129 16:35:21.445931 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.479626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.479683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.479700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.479727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.479745 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.583904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.583954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.583971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.583996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.584014 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.687942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.688008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.688028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.688055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.688073 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.791991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.792060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.792080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.792107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.792126 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.896099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.896158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.896173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.896220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.896238 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.999056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.999094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.999102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.999117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:21 crc kubenswrapper[4746]: I0129 16:35:21.999135 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:21Z","lastTransitionTime":"2026-01-29T16:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.102751 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.102813 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.102832 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.102862 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.102882 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.206007 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.206078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.206094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.206122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.206140 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.309853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.309929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.309948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.309979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.310003 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.414072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.414155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.414179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.414295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.414321 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.420270 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:56:45.154992845 +0000 UTC Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.445056 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.445104 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.445078 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:22 crc kubenswrapper[4746]: E0129 16:35:22.445338 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:22 crc kubenswrapper[4746]: E0129 16:35:22.445458 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:22 crc kubenswrapper[4746]: E0129 16:35:22.445646 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.517463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.517513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.517533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.517564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.517598 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.568715 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.570088 4746 scope.go:117] "RemoveContainer" containerID="569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.620752 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.620806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.620821 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.620843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.620858 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.723160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.723235 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.723249 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.723270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.723285 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.826010 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.826063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.826073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.826093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.826106 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.829225 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/1.log" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.831296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4"} Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.831665 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.848356 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.867706 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a99
5d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.879876 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.892121 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z" Jan 29 
16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.904422 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.926739 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.928664 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.928750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.928762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.928782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.928796 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:22Z","lastTransitionTime":"2026-01-29T16:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.941426 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.958475 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.980220 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:22 crc kubenswrapper[4746]: I0129 16:35:22.999122 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.014923 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.031053 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.031099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.031108 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.031127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.031137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.040309 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.055935 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.067095 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.078519 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.094234 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae
77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.134344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.134391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.134402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.134420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.134431 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.238786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.238851 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.238866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.238894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.238915 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.342274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.342331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.342343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.342367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.342380 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.421255 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:41:18.512493965 +0000 UTC Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.445203 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:23 crc kubenswrapper[4746]: E0129 16:35:23.445399 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.445710 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.445775 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.445793 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.445826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.445843 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.548799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.548837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.548845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.548860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.548870 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.651845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.651893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.651901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.651923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.651934 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.754857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.754917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.754934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.754962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.754982 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.838593 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/2.log" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.839826 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/1.log" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.844821 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4" exitCode=1 Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.844877 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.844930 4746 scope.go:117] "RemoveContainer" containerID="569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.846451 4746 scope.go:117] "RemoveContainer" containerID="2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4" Jan 29 16:35:23 crc kubenswrapper[4746]: E0129 16:35:23.847095 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.857793 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.857828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.857840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.857858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.857872 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.873844 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.896903 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.923744 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.949088 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.961001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.961084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.961108 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.961141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.961159 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:23Z","lastTransitionTime":"2026-01-29T16:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.970542 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:23 crc kubenswrapper[4746]: I0129 16:35:23.990415 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:23Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.009085 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.026911 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.038637 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.058430 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.063909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.063967 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.063985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.064009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.064025 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.077988 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a99
5d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569d300d5423386f4cefab9495d7969926257677662a74762ffb23d6bcd12fac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:09Z\\\",\\\"message\\\":\\\"r.go:443] Built service openshift-machine-api/cluster-autoscaler-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.245\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9192, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 16:35:09.835351 6189 services_controller.go:444] Built service openshift-machine-api/cluster-autoscaler-operator LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835360 6189 services_controller.go:445] Built service openshift-machine-api/cluster-autoscaler-operator LB template configs for network=default: []services.lbConfig(nil)\\\\nI0129 16:35:09.835384 6189 services_controller.go:451] Built service openshift-machine-api/cluster-autoscaler-operator cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 
controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.089462 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.102493 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 
16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.114310 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.126017 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.140108 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.166887 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.166926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.166937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.166959 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.166973 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.270047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.270121 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.270141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.270245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.270277 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.374104 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.374160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.374179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.374250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.374275 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.421881 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:15:42.70118802 +0000 UTC Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.445734 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.445753 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.445948 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.446092 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.446332 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.446539 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.477409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.477466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.477478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.477497 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.477507 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.579994 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.580046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.580063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.580087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.580105 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.683309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.683356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.683367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.683387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.683399 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.786548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.786635 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.786658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.786698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.786723 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.830532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.830567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.830577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.830595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.830608 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.843431 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.848754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.848798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.848807 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.848828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.848838 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.850418 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/2.log" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.855381 4746 scope.go:117] "RemoveContainer" containerID="2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.855682 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.867518 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.870341 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.872008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.872050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.872063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.872084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.872099 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.883418 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.883679 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.888232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.888280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.888299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.888327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.888345 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.897145 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.908098 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.910474 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.913179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.913219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.913229 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.913248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.913266 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.923680 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.924578 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: E0129 16:35:24.924676 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.926474 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.926520 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.926532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.926552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.926565 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:24Z","lastTransitionTime":"2026-01-29T16:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.939320 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.952997 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.967632 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:24 crc kubenswrapper[4746]: I0129 16:35:24.980390 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.000100 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae
77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:24Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.013817 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.030151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.030467 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.030603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.030730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.030842 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.031962 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.055020 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.066952 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.080002 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.090768 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.134033 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.134090 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.134107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.134137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.134156 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.237424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.237456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.237464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.237478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.237487 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.339965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.340005 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.340016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.340038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.340048 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.422405 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:02:56.450876083 +0000 UTC
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.442526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.442570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.442582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.442600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.442611 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.445102 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:35:25 crc kubenswrapper[4746]: E0129 16:35:25.445345 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.545324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.545378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.545391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.545412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.545425 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.649350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.649429 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.649463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.649495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.649515 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.756364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.756442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.756455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.756475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.756491 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.858790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.858840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.858855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.858876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.858887 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.961753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.961791 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.961808 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.961827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:25 crc kubenswrapper[4746]: I0129 16:35:25.961840 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:25Z","lastTransitionTime":"2026-01-29T16:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.064798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.064862 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.064886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.064925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.064948 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.167879 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.167962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.167984 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.168018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.168042 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.271451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.271747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.271800 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.271835 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.271857 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.375319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.375399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.375423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.375458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.375518 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.423354 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:09:15.191230567 +0000 UTC
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.440109 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.444980 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.445152 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.445332 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:35:26 crc kubenswrapper[4746]: E0129 16:35:26.445238 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:35:26 crc kubenswrapper[4746]: E0129 16:35:26.445596 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
Jan 29 16:35:26 crc kubenswrapper[4746]: E0129 16:35:26.445733 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.453554 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.469286 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.479022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.479129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.479314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.479349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.479371 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.496706 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.513463 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.533228 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.551427 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.572044 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.582071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.582237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.582266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.582344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.582372 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.592157 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.621627 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.635621 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.653927 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.673509 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.685912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.685958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.685969 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.685988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.686000 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.695265 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.713552 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.730181 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.749978 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.767263 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:26Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.789147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.789250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.789270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.789299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.789320 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.892787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.892904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.892932 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.892964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.892989 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.996448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.996505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.996523 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.996547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:26 crc kubenswrapper[4746]: I0129 16:35:26.996564 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:26Z","lastTransitionTime":"2026-01-29T16:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.099668 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.099724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.099771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.099797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.099816 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.202823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.202883 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.202901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.202931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.202949 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.305663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.305765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.305824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.305851 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.305902 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.409050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.409100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.409118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.409137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.409150 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.423907 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:20:52.616324811 +0000 UTC Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.445356 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:27 crc kubenswrapper[4746]: E0129 16:35:27.445603 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.511573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.511655 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.511667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.511683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.511692 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.614353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.614414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.614442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.614472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.614490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.717700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.717753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.717769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.717792 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.717811 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.821236 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.821296 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.821311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.821333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.821348 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.923509 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.923579 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.923600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.923626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:27 crc kubenswrapper[4746]: I0129 16:35:27.923643 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:27Z","lastTransitionTime":"2026-01-29T16:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.028163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.028279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.028306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.028337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.028379 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.132325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.132387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.132404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.132437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.132455 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.235780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.235876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.235899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.235941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.235964 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.339446 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.339523 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.339541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.339565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.339584 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.399755 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:28 crc kubenswrapper[4746]: E0129 16:35:28.399911 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:28 crc kubenswrapper[4746]: E0129 16:35:28.399974 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. No retries permitted until 2026-01-29 16:35:44.399955979 +0000 UTC m=+66.800540623 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs") pod "network-metrics-daemon-f72wn" (UID: "ed3cddee-6243-41b8-9ac3-7ef6772d2960") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.424965 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:08:55.001276963 +0000 UTC Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.442661 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.442702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.442710 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.442729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.442739 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.445280 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.445342 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.445373 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:28 crc kubenswrapper[4746]: E0129 16:35:28.445428 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:28 crc kubenswrapper[4746]: E0129 16:35:28.445602 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:28 crc kubenswrapper[4746]: E0129 16:35:28.445709 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.457967 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.470859 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.490898 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.530441 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a99
5d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.544429 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.545874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.546049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.546153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.546294 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.546387 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.562730 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.582320 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.601890 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.616960 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.630780 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.649577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.649656 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.649682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.649714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.649738 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.651837 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.671813 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.695440 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.714154 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.734488 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.747038 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.752380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.752427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.752436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.752456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.752467 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.759575 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.855875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.856319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.856633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.856864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.857107 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.960582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.960899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.961022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.961159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:28 crc kubenswrapper[4746]: I0129 16:35:28.961871 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:28Z","lastTransitionTime":"2026-01-29T16:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.065691 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.065759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.065781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.065807 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.065826 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.168165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.168282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.168306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.168338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.168357 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.271981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.272051 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.272073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.272103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.272122 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.375071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.375124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.375140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.375163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.375178 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.425245 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:41:11.828439236 +0000 UTC Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.444992 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:29 crc kubenswrapper[4746]: E0129 16:35:29.445216 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.478686 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.478747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.478759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.478780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.478798 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.581911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.581994 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.582011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.582041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.582060 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.685151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.685239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.685259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.685284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.685301 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.788762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.788826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.788844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.788873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.788891 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.893080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.893158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.893176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.893237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.893258 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.995836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.995891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.995903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.995923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:29 crc kubenswrapper[4746]: I0129 16:35:29.995936 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:29Z","lastTransitionTime":"2026-01-29T16:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.099244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.099296 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.099308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.099326 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.099337 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.202849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.202948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.202975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.203017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.203042 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.221508 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.221668 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.221730 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:02.221691343 +0000 UTC m=+84.622276027 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.221963 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.222154 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:02.222115614 +0000 UTC m=+84.622700298 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.305747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.305778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.305787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.305801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.305811 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.322858 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.322935 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.322985 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323123 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323151 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323181 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 
16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323225 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323223 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:02.323180087 +0000 UTC m=+84.723764731 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323153 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323312 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:02.32328558 +0000 UTC m=+84.723870394 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323338 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323360 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.323447 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:02.323399153 +0000 UTC m=+84.723984007 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.409093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.409143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.409160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.409216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.409237 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.426035 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:44:19.014847371 +0000 UTC Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.445610 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.445638 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.445798 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.445927 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.446263 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:30 crc kubenswrapper[4746]: E0129 16:35:30.446487 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.512600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.512666 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.512683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.512713 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.512734 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.617091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.617138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.617149 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.617169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.617201 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.720455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.720526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.720541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.720568 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.720586 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.823785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.823844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.823855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.823875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.823889 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.927087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.927148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.927161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.927211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.927228 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:30Z","lastTransitionTime":"2026-01-29T16:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.960792 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.979914 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:30 crc kubenswrapper[4746]: I0129 16:35:30.994920 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.010501 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.028966 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.030648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.030769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.030846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.030921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.030992 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.043556 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.061308 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.079224 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.103255 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a99
5d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.116762 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.129687 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.133842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.133886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.133900 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.133919 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.133930 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.140723 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.154142 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.167342 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.182667 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.198405 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.211911 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.227465 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.236258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.236303 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.236319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.236341 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.236355 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.339296 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.339356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.339372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.339401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.339417 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.427250 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:56:24.557428973 +0000 UTC Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.443590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.443684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.443714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.443750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.443776 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.444759 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:31 crc kubenswrapper[4746]: E0129 16:35:31.444922 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.547572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.547641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.547659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.547683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.547700 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.650315 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.650386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.650401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.650420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.650434 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.752876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.752930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.752944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.752965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.752981 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.856180 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.856293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.856310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.856338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.856358 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.959224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.959265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.959276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.959294 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:31 crc kubenswrapper[4746]: I0129 16:35:31.959305 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:31Z","lastTransitionTime":"2026-01-29T16:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.062901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.062952 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.062968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.062988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.063002 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.166718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.166780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.166799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.166826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.166845 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.269906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.269972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.269987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.270011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.270025 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.372857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.372911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.372921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.372941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.372956 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.427683 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:49:28.250193908 +0000 UTC Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.445121 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.445210 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.445217 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:32 crc kubenswrapper[4746]: E0129 16:35:32.445331 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:32 crc kubenswrapper[4746]: E0129 16:35:32.445523 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:32 crc kubenswrapper[4746]: E0129 16:35:32.445743 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.476309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.476369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.476386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.476408 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.476427 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.579935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.580001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.580018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.580053 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.580073 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.683232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.683301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.683319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.683348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.683366 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.787268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.787326 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.787340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.787360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.787375 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.890936 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.891000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.891011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.891030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.891045 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.994127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.994171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.994183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.994237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:32 crc kubenswrapper[4746]: I0129 16:35:32.994253 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:32Z","lastTransitionTime":"2026-01-29T16:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.097203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.097256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.097266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.097287 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.097300 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.200077 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.200126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.200136 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.200153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.200169 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.302875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.302929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.302941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.302963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.302977 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.405689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.405768 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.405790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.405816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.405837 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.428391 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:34:06.443928783 +0000 UTC Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.445155 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:33 crc kubenswrapper[4746]: E0129 16:35:33.445387 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.508679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.508732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.508747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.508767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.508781 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.612438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.612844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.612864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.612885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.612902 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.715696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.715745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.715754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.715775 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.715785 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.818958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.819018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.819028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.819049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.819062 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.922717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.922812 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.922831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.922860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:33 crc kubenswrapper[4746]: I0129 16:35:33.922883 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:33Z","lastTransitionTime":"2026-01-29T16:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.026813 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.026892 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.026905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.026924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.026964 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.130351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.130419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.130434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.130471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.130495 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.233584 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.233645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.233659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.233685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.233701 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.336961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.337031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.337046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.337067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.337081 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.429519 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:20:15.904245322 +0000 UTC Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.440448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.440518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.440530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.440568 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.440590 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.444860 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.444985 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.445139 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:34 crc kubenswrapper[4746]: E0129 16:35:34.445032 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:34 crc kubenswrapper[4746]: E0129 16:35:34.445402 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:34 crc kubenswrapper[4746]: E0129 16:35:34.445588 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.543392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.543458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.543477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.543500 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.543517 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.646749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.646822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.646841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.646874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.646893 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.749844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.749904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.749915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.749935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.749946 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.852832 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.852905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.852915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.852933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.852944 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.955908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.956009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.956030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.956062 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:34 crc kubenswrapper[4746]: I0129 16:35:34.956084 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:34Z","lastTransitionTime":"2026-01-29T16:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.059659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.059705 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.059718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.059736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.059747 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.163229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.163302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.163328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.163364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.163389 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.202388 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.202499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.202805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.203030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.203109 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: E0129 16:35:35.218284 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:35Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.223472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.223502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
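From here the status updates stop at a different layer: the PATCH of the Node object is rejected because the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-29. One way to confirm that independently of the kubelet is to read the certificate off the wire; the sketch below dials the address from the log and prints each peer certificate's validity window (illustrative only, not part of any OpenShift tooling):

```go
// Dial the webhook endpoint from the log and print its certificate
// validity window. InsecureSkipVerify is deliberate: we only want to
// inspect the chain, not trust it, so an expired cert still prints.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.String(),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

Note the asymmetry in the log: the kubelet's own serving certificate is fine (certificate_manager reports expiry 2026-02-24 with a rotation deadline of 2025-12-23), so only the webhook-guarded node PATCH fails, and the retries that follow all die the same way.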
event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.223514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.223534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.223548 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: E0129 16:35:35.237423 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:35Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.243118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.243179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
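Each retry carries the same payload; only the attempt timestamps advance. The condition the kubelet is trying to patch in is the object printed by the setters.go:603 lines, and it decodes as a standard Kubernetes node condition. The struct below is a pared-down stand-in for v1.NodeCondition (k8s.io/api/core/v1), kept dependency-free so the example runs as-is:

```go
// Decode one "Node became not ready" condition from the log. The struct
// is a minimal stand-in for v1.NodeCondition (k8s.io/api/core/v1),
// trimmed so the example needs no external modules.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"
)

type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Condition payload copied verbatim from a setters.go:603 entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatalf("unmarshal condition: %v", err)
	}
	fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status,
		c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
```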
event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.243232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.243266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.243284 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: E0129 16:35:35.257574 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.263133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.263167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.263180 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.263219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.263237 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: E0129 16:35:35.278750 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.283680 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.283741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.283754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.283777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.283791 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: E0129 16:35:35.298629 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:35 crc kubenswrapper[4746]: E0129 16:35:35.298841 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.300605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.300649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.300659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.300679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.300695 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.405156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.405231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.405248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.405283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.405300 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.430144 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:39:08.757933974 +0000 UTC Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.445761 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:35 crc kubenswrapper[4746]: E0129 16:35:35.446104 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.509250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.509311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.509325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.509350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.509364 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.613153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.613232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.613245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.613266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.613282 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.715869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.715930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.715948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.715975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.715994 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.819973 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.820023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.820039 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.820066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.820084 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.922789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.922847 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.922860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.922880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:35 crc kubenswrapper[4746]: I0129 16:35:35.922894 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:35Z","lastTransitionTime":"2026-01-29T16:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.026785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.026834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.026845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.026865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.026878 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.130441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.130541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.130570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.130606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.130635 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.234080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.234151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.234165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.234229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.234256 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.338163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.338283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.338300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.338323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.338340 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.430590 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:30:43.593527457 +0000 UTC Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.441213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.441264 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.441279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.441299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.441311 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.445583 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.445656 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.445595 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:36 crc kubenswrapper[4746]: E0129 16:35:36.445787 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:36 crc kubenswrapper[4746]: E0129 16:35:36.445934 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:36 crc kubenswrapper[4746]: E0129 16:35:36.446100 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.544330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.544409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.544428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.544460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.544480 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.647713 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.647756 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.647766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.647783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.647795 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.750493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.750601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.750632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.750674 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.750706 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.853359 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.853432 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.853450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.853478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.853498 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.956908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.956979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.956997 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.957027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:36 crc kubenswrapper[4746]: I0129 16:35:36.957047 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:36Z","lastTransitionTime":"2026-01-29T16:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.059904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.059940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.059951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.059970 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.059982 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.163309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.163396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.163418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.163493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.163510 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.266571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.266638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.266658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.266684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.266704 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.369887 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.369969 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.369986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.370010 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.370027 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.430732 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:44:00.782314372 +0000 UTC Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.445132 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:37 crc kubenswrapper[4746]: E0129 16:35:37.445345 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.473426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.473499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.473516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.473543 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.473561 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.576692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.576760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.576779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.576807 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.576832 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.680496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.680549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.680563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.680584 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.680600 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.784293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.784365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.784383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.784410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.784428 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.887969 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.888035 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.888058 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.888089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.888139 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.997639 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.997723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.997748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.997782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:37 crc kubenswrapper[4746]: I0129 16:35:37.997810 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:37Z","lastTransitionTime":"2026-01-29T16:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.101905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.101970 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.101986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.102011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.102029 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.205915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.205975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.205988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.206009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.206022 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.309766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.309836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.309853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.309880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.309901 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.413621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.413682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.413701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.413730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.413753 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.430960 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:58:08.776819601 +0000 UTC
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.444999 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.445120 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.444999 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:35:38 crc kubenswrapper[4746]: E0129 16:35:38.445343 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:35:38 crc kubenswrapper[4746]: E0129 16:35:38.445838 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:35:38 crc kubenswrapper[4746]: E0129 16:35:38.446029 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.447163 4746 scope.go:117] "RemoveContainer" containerID="2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4"
Jan 29 16:35:38 crc kubenswrapper[4746]: E0129 16:35:38.447868 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421"
Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.473296 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.492825 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.514327 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.516643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.516677 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.516687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.516706 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.516719 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.536926 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd7
89a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.562240 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.582507 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.602555 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.620998 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.621044 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.621072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.621094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.621107 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.622738 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.642737 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.659165 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.692930 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a99
5d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.709578 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.726847 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.726921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.726933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.727012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.727028 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.727728 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.745704 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc 
kubenswrapper[4746]: I0129 16:35:38.768287 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.795123 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.812552 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.830043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.830119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.830133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.830174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.830225 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.933317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.933363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.933376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.933398 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:38 crc kubenswrapper[4746]: I0129 16:35:38.933417 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:38Z","lastTransitionTime":"2026-01-29T16:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.035741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.036730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.036819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.036899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.036987 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.141087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.141176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.141213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.141235 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.141250 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.245651 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.246150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.246402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.246567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.246739 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.350698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.351246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.351410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.351562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.351813 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.431765 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 03:41:20.263343775 +0000 UTC Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.445616 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:39 crc kubenswrapper[4746]: E0129 16:35:39.446262 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.456718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.456765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.456776 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.456797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.456811 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.561870 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.561998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.562065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.562148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.562175 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.666023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.666088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.666102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.666129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.666144 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.769548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.769592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.769602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.769621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.769633 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.872300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.872339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.872350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.872368 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.872380 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.975308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.975347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.975357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.975374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:39 crc kubenswrapper[4746]: I0129 16:35:39.975386 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:39Z","lastTransitionTime":"2026-01-29T16:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.079014 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.079080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.079097 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.079122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.079138 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.184034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.184103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.184115 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.184137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.184150 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.293134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.293214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.293225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.293242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.293253 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.396932 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.396975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.396993 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.397017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.397035 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.432274 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:00:07.78319647 +0000 UTC Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.445985 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.446107 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:40 crc kubenswrapper[4746]: E0129 16:35:40.446234 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.446014 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:40 crc kubenswrapper[4746]: E0129 16:35:40.446341 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:40 crc kubenswrapper[4746]: E0129 16:35:40.446414 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.499924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.499998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.500012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.500040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.500058 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.602572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.602621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.602638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.602661 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.602678 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.705447 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.705496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.705505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.705521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.705532 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.808908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.808955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.808968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.808988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.809002 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.911838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.911879 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.911888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.911905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:40 crc kubenswrapper[4746]: I0129 16:35:40.911915 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:40Z","lastTransitionTime":"2026-01-29T16:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.015125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.015164 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.015172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.015221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.015236 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.118695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.118748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.118759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.118780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.118794 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.222507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.222558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.222576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.222602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.222620 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.325387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.325458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.325479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.325499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.325513 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.428849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.428914 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.428929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.428950 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.428971 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.434011 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:03:49.12820493 +0000 UTC Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.445371 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:41 crc kubenswrapper[4746]: E0129 16:35:41.445531 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.531844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.531885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.531894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.531912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.531922 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.634640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.634689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.634725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.634745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.634756 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.736844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.736884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.736899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.736915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.736924 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.840171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.840226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.840235 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.840252 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.840261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.942881 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.942916 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.942928 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.942945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:41 crc kubenswrapper[4746]: I0129 16:35:41.942955 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:41Z","lastTransitionTime":"2026-01-29T16:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.045915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.045971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.045979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.045994 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.046005 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.148921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.148983 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.148996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.149016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.149027 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.251203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.251247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.251284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.251302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.251318 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.353539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.353565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.353573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.353588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.353598 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.434923 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:17:51.006507798 +0000 UTC Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.445311 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.445368 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.445416 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:42 crc kubenswrapper[4746]: E0129 16:35:42.445494 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:42 crc kubenswrapper[4746]: E0129 16:35:42.445635 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:42 crc kubenswrapper[4746]: E0129 16:35:42.445806 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.456587 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.456634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.456648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.456667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.456679 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.559555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.559614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.559626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.559648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.559659 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.662445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.662490 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.662499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.662516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.662526 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.765386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.765434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.765445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.765465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.765477 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.868405 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.868461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.868477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.868498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.868510 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.971382 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.971442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.971451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.971470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:42 crc kubenswrapper[4746]: I0129 16:35:42.971482 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:42Z","lastTransitionTime":"2026-01-29T16:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.073866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.073941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.073953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.073972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.073986 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.177048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.177092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.177106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.177124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.177136 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.280630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.280682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.280698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.280719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.280733 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.384412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.384477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.384495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.384518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.384536 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.435259 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:08:36.977378628 +0000 UTC Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.445581 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:43 crc kubenswrapper[4746]: E0129 16:35:43.445731 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.486637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.486682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.486691 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.486708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.486720 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.589308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.589344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.589353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.589370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.589383 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.691389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.691426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.691440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.691458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.691469 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.816809 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.816844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.816853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.816870 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.816881 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.919460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.919496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.919504 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.919519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:43 crc kubenswrapper[4746]: I0129 16:35:43.919530 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:43Z","lastTransitionTime":"2026-01-29T16:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.021598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.021657 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.021669 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.021690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.021702 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.124041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.124139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.124157 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.124185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.124225 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.227046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.227089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.227098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.227116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.227126 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.329342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.329389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.329401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.329421 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.329435 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.420630 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:44 crc kubenswrapper[4746]: E0129 16:35:44.420893 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:35:44 crc kubenswrapper[4746]: E0129 16:35:44.421019 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:16.420986607 +0000 UTC m=+98.821571451 (durationBeforeRetry 32s). 
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.431822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.431854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.431869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.431885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.431895 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.436320 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:19:22.417982682 +0000 UTC
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.444693 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.444761 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.444701 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:35:44 crc kubenswrapper[4746]: E0129 16:35:44.444832 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:35:44 crc kubenswrapper[4746]: E0129 16:35:44.444974 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:44 crc kubenswrapper[4746]: E0129 16:35:44.445110 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.534701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.534736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.534744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.534759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.534767 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.637235 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.637295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.637314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.637347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.637366 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.739964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.740050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.740077 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.740113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.740138 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.845086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.845129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.845147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.845167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.845180 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.948310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.948356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.948370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.948393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:44 crc kubenswrapper[4746]: I0129 16:35:44.948409 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:44Z","lastTransitionTime":"2026-01-29T16:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.051386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.051434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.051450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.051471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.051486 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.154477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.154999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.155158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.155339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.155486 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.259232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.259273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.259283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.259301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.259311 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.362696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.362744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.362777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.362802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.362816 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.436614 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:31:32.237361503 +0000 UTC Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.445242 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:45 crc kubenswrapper[4746]: E0129 16:35:45.445416 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.465311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.465354 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.465362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.465380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.465390 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.527842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.527886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.527898 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.527917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.527928 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: E0129 16:35:45.540223 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.544396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.544429 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.544438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.544455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.544467 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: E0129 16:35:45.557329 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.562931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.563064 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.563275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.563394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.563494 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: E0129 16:35:45.582043 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.586073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.586110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.586119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.586140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.586151 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: E0129 16:35:45.601131 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.605474 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.605506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.605515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.605541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.605552 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: E0129 16:35:45.618467 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:45 crc kubenswrapper[4746]: E0129 16:35:45.618591 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.620460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.620482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.620491 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.620507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.620516 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.723757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.723809 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.723822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.723842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.723856 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.826375 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.826670 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.826761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.826833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.826894 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.929044 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.929098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.929114 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.929135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.929151 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:45Z","lastTransitionTime":"2026-01-29T16:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.930589 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/0.log"
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.930653 4746 generic.go:334] "Generic (PLEG): container finished" podID="017d8376-e00b-442b-ac6b-b2189ff75132" containerID="121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8" exitCode=1
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.930693 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerDied","Data":"121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8"}
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.931318 4746 scope.go:117] "RemoveContainer" containerID="121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8"
Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.948258 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.962921 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:45 crc kubenswrapper[4746]: I0129 16:35:45.982943 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.001100 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:45Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.017344 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 
16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.031830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.031871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.031881 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.031901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.031913 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.033794 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.048045 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.062594 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.077541 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.090433 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.107790 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.124818 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"2026-01-29T16:35:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9\\\\n2026-01-29T16:35:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9 to /host/opt/cni/bin/\\\\n2026-01-29T16:35:00Z [verbose] multus-daemon started\\\\n2026-01-29T16:35:00Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:35:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.134782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.134830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.134844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.134866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.134881 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.140537 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.156099 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.170110 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.185631 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.208578 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae
77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.238148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.238262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.238283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.238304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.238317 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.341440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.341480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.341492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.341513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.341527 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.437435 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:09:01.67920234 +0000 UTC
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.444756 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.444800 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.444812 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.444855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.444868 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.445347 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.445407 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.445423 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:35:46 crc kubenswrapper[4746]: E0129 16:35:46.445676 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
Jan 29 16:35:46 crc kubenswrapper[4746]: E0129 16:35:46.445803 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:35:46 crc kubenswrapper[4746]: E0129 16:35:46.445954 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.548244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.548300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.548313 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.548334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.548349 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.651346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.651399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.651409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.651427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.651437 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.754080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.754142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.754159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.754222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.754243 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.858063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.858124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.858141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.858169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.858258 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.937492 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/0.log"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.937573 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerStarted","Data":"9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.960381 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.961714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.961778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.961805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.961837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.961862 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:46Z","lastTransitionTime":"2026-01-29T16:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.977750 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:46 crc kubenswrapper[4746]: I0129 16:35:46.990286 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:46Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.006221 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.019017 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.035745 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"2026-01-29T16:35:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9\\\\n2026-01-29T16:35:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9 to /host/opt/cni/bin/\\\\n2026-01-29T16:35:00Z [verbose] multus-daemon started\\\\n2026-01-29T16:35:00Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:35:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.050260 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.073155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.073220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.073233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.073256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.073270 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.083275 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.112844 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.128616 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.145134 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.159164 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.170882 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.176295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.176353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.176371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.176391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.176405 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.189829 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a99
5d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.199735 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.212515 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.229680 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:47Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.278816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.278856 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.278867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.278884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.278899 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.380964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.381012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.381024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.381040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.381051 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.438480 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:49:41.79013372 +0000 UTC Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.444643 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:47 crc kubenswrapper[4746]: E0129 16:35:47.444743 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.483602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.483658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.483668 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.483682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.483693 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.586019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.586061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.586071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.586090 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.586102 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.688568 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.688623 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.688634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.688652 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.688665 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.791632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.791693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.791709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.791735 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.791751 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.895547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.895596 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.895608 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.895629 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.895641 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.998684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.998733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.998743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.998761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:47 crc kubenswrapper[4746]: I0129 16:35:47.998774 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:47Z","lastTransitionTime":"2026-01-29T16:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.102472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.102517 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.102529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.102546 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.102558 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.205080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.205123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.205135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.205152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.205164 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.307884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.307919 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.307927 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.307944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.307953 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.411165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.411228 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.411240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.411258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.411268 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.439225 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:03:57.64527793 +0000 UTC Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.445628 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.445626 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.445773 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:48 crc kubenswrapper[4746]: E0129 16:35:48.445949 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:48 crc kubenswrapper[4746]: E0129 16:35:48.446054 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:48 crc kubenswrapper[4746]: E0129 16:35:48.446259 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.464571 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.477560 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.493627 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"2026-01-29T16:35:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9\\\\n2026-01-29T16:35:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9 to /host/opt/cni/bin/\\\\n2026-01-29T16:35:00Z [verbose] multus-daemon started\\\\n2026-01-29T16:35:00Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:35:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.505626 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.514927 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.514967 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.514979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.515001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.515014 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.518874 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.535249 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.548934 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.568646 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.582747 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.599505 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.617237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.617549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.617649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.617741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.617826 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.623379 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a99
5d5cdf7cf06cb1c6166b96d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.636826 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.649762 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.669039 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.687080 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.702436 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.717937 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.721120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.721169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.721225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.721244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.721352 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.824744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.824786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.824797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.824814 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.824826 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.928093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.928441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.928504 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.928597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:48 crc kubenswrapper[4746]: I0129 16:35:48.928696 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:48Z","lastTransitionTime":"2026-01-29T16:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.031040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.031088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.031099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.031120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.031132 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.133702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.133746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.133755 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.133776 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.133790 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.235885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.235924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.235933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.235948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.235958 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.338255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.338321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.338331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.338349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.338359 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.439387 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:28:17.275208788 +0000 UTC
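The certificate_manager.go:356 record above shows a serving certificate that is still valid until 2026-02-24 while the logged rotation deadline is already in the past, and each pass through the loop prints a different deadline. That is consistent with a rotation point jittered at random late in the certificate's lifetime: once the chosen instant has passed, rotation is attempted on every iteration. A minimal sketch of that idea, assuming (for illustration only, not taken from this log or from the kubelet source) a jitter band of 70-90% of the certificate lifetime and a hypothetical one-year issuance window:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant in the 70-90% span of the
// certificate's validity, so a fleet of nodes does not rotate at once.
// Illustrative only; the real client-go certificate manager differs in detail.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*r.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration copied from the log; NotBefore is a hypothetical value,
	// since the log only prints the expiration side.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	r := rand.New(rand.NewSource(1))
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter, r))
	}
}

Each iteration yields a different deadline, which matches the way consecutive kubelet passes in this log report different rotation deadlines for the same certificate.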
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.441093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.441144 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.441154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.441172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.441202 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.444687 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:35:49 crc kubenswrapper[4746]: E0129 16:35:49.444830 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.544045 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.544106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.544119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.544143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.544158 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.646825 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.646874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.646884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.646900 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.646910 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.749318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.749368 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.749383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.749425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.749440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.852577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.852678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.852697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.852773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.852792 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.955680 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.955716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.955724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.955739 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:49 crc kubenswrapper[4746]: I0129 16:35:49.955748 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:49Z","lastTransitionTime":"2026-01-29T16:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.059749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.059846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.059865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.059891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.059906 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:50Z","lastTransitionTime":"2026-01-29T16:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.163132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.163176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.163202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.163222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.163232 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:50Z","lastTransitionTime":"2026-01-29T16:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.265727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.265760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.265769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.265785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.265796 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:50Z","lastTransitionTime":"2026-01-29T16:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.368224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.368261 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.368272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.368290 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.368301 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:50Z","lastTransitionTime":"2026-01-29T16:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.439603 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:40:34.178524589 +0000 UTC
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.446008 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:35:50 crc kubenswrapper[4746]: E0129 16:35:50.446153 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
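Every NetworkReady=false record above bottoms out in the same condition: nothing parseable in /etc/kubernetes/cni/net.d/. Conceptually the readiness probe is a scan of that directory for a CNI network configuration; the sketch below approximates such a check (the directory path comes from the log, the extension list from common CNI conventions, and none of it is the actual CRI-O or kubelet implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// config file, approximating the readiness check whose failure the
// kubelet surfaces as "no CNI configuration file in ...".
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	fmt.Println(ok, err) // false until the network plugin writes its config
}

On this node the check would keep coming back false until ovnkube-controller stays up long enough to write its config, which is exactly the loop the surrounding records describe.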
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.446386 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:35:50 crc kubenswrapper[4746]: E0129 16:35:50.446457 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.446675 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:35:50 crc kubenswrapper[4746]: E0129 16:35:50.446739 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.922323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.922370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.922387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.922423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:50 crc kubenswrapper[4746]: I0129 16:35:50.922437 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:50Z","lastTransitionTime":"2026-01-29T16:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
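The setters.go:603 records carry the node's Ready condition as inline JSON, which makes Ready flaps easy to extract when auditing a log like this one. A small sketch that decodes one such condition into a typed value; the struct mirrors only the fields visible above (the real v1.NodeCondition type uses typed timestamps rather than strings, and the message here is shortened):

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields visible in the setters.go:603 records.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:50Z","lastTransitionTime":"2026-01-29T16:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}

Fed the condition from the preceding record, it prints Ready=False since 2026-01-29T16:35:50Z (KubeletNotReady).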
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.025576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.025624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.025637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.025657 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.025689 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.128023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.128453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.128689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.128895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.129071 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.232082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.232126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.232139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.232160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.232174 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.335041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.335088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.335102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.335127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.335142 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.438643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.438697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.438713 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.438742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.438761 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.440821 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:31:16.975041016 +0000 UTC
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.445273 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:35:51 crc kubenswrapper[4746]: E0129 16:35:51.445521 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.446624 4746 scope.go:117] "RemoveContainer" containerID="2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.541929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.541963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.541972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.541991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.542001 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.644475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.644519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.644531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.644548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.644559 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.747675 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.747734 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.747747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.747766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.747781 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.850427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.850458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.850466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.850480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.850490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.953684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.953732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.953742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.953760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.953771 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:51Z","lastTransitionTime":"2026-01-29T16:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.957860 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/2.log" Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.961342 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.961916 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:35:51 crc kubenswrapper[4746]: I0129 16:35:51.977157 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:51Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.002525 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:51Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.016302 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.030490 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 
16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.044373 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.056802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.056852 4746 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.056862 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.056884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.057170 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.061318 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.082657 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.107506 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.122159 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.137814 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.151393 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.159425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.159459 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.159471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.159487 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.159498 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.168556 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"2026-01-29T16:35:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9\\\\n2026-01-29T16:35:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9 to /host/opt/cni/bin/\\\\n2026-01-29T16:35:00Z [verbose] multus-daemon started\\\\n2026-01-29T16:35:00Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:35:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.181137 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.194900 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.205514 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.222279 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.238590 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae
77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.263347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.263389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.263400 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.263417 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.263428 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.366627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.366694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.366711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.366733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.366751 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.441464 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:37:06.971942542 +0000 UTC Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.444780 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.444797 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:52 crc kubenswrapper[4746]: E0129 16:35:52.444951 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.444798 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:52 crc kubenswrapper[4746]: E0129 16:35:52.445068 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:52 crc kubenswrapper[4746]: E0129 16:35:52.445126 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.470107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.470156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.470166 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.470198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.470210 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.572693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.572729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.572738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.572753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.572762 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.675275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.675336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.675350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.675372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.675384 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.779111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.779174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.779212 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.779235 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.779246 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.882394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.882450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.882462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.882482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.882498 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.968291 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/3.log" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.969098 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/2.log" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.973171 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7" exitCode=1 Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.973311 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.973387 4746 scope.go:117] "RemoveContainer" containerID="2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.974051 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7" Jan 29 16:35:52 crc kubenswrapper[4746]: E0129 16:35:52.975791 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.987159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.987222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.987251 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.987272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.987288 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:52Z","lastTransitionTime":"2026-01-29T16:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:52 crc kubenswrapper[4746]: I0129 16:35:52.996099 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.017157 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.032534 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.044414 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.056592 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.068748 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is 
after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.081954 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.090377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.090418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.090430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.090451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.090490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.100627 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.127033 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bde515c8c3e7aa1857758816df9a6c671a67a995d5cdf7cf06cb1c6166b96d4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:23Z\\\",\\\"message\\\":\\\"ble:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:35:23.433694 6430 obj_retry.go:551] Creating *factory.egressNode crc took: 2.401705ms\\\\nI0129 16:35:23.433729 6430 factory.go:1336] Added *v1.Node event handler 7\\\\nI0129 16:35:23.433779 6430 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0129 16:35:23.433849 6430 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:35:23.433871 6430 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:35:23.433909 6430 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:35:23.433966 6430 factory.go:656] Stopping watch factory\\\\nI0129 16:35:23.433998 6430 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:35:23.434117 6430 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0129 16:35:23.434260 6430 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0129 16:35:23.434308 6430 ovnkube.go:599] Stopped ovnkube\\\\nI0129 16:35:23.434341 6430 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 16:35:23.434430 6430 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:52Z\\\",\\\"message\\\":\\\"default: []services.lbConfig(nil)\\\\nF0129 16:35:52.252051 6829 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is 
after 2025-08-24T17:21:41Z]\\\\nI0129 16:35:52.252051 6829 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, Af\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/o
vn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.140283 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.155977 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 
16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.173314 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.188726 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.200925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.200975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.200987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.201008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.201025 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.205650 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.233455 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.247889 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.266657 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"2026-01-29T16:35:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9\\\\n2026-01-29T16:35:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9 to /host/opt/cni/bin/\\\\n2026-01-29T16:35:00Z [verbose] multus-daemon started\\\\n2026-01-29T16:35:00Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:35:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:53Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.305025 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.305082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.305118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.305139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.305156 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.408239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.408282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.408294 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.408310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.408323 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.441901 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:52:50.197548344 +0000 UTC
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.445425 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:35:53 crc kubenswrapper[4746]: E0129 16:35:53.445568 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.510662 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.510695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.510704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.510722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.510731 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.614607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.614753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.614783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.614826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.614847 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.718011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.718057 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.718067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.718086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.718098 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.822237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.822299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.822311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.822367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.822382 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.926556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.926615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.926625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.926644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.926656 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:53Z","lastTransitionTime":"2026-01-29T16:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.980704 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/3.log"
Jan 29 16:35:53 crc kubenswrapper[4746]: I0129 16:35:53.985702 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"
Jan 29 16:35:53 crc kubenswrapper[4746]: E0129 16:35:53.985928 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421"
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.007840 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.024695 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.030312 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.030381 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.030392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.030413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.030445 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.041065 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.060914 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"2026-01-29T16:35:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9\\\\n2026-01-29T16:35:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9 to /host/opt/cni/bin/\\\\n2026-01-29T16:35:00Z [verbose] multus-daemon started\\\\n2026-01-29T16:35:00Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:35:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.077055 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.092391 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.107094 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.123306 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.133586 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.133637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.133649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.133667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.133679 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.140086 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.155486 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.171663 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.182474 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.195331 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-co
nfig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.205851 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.220354 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.234553 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.237032 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.237076 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.237090 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.237111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.237126 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.255933 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10395a1f5371ed5d5e4038d5df90a5066902b0
355cc62a16489616073a94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:52Z\\\",\\\"message\\\":\\\"default: []services.lbConfig(nil)\\\\nF0129 16:35:52.252051 6829 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:35:52.252051 6829 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, Af\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.340166 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.340246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.340260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.340279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.340296 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.442067 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:14:58.228271886 +0000 UTC Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.442449 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.442498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.442510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.442530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.442543 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.444880 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.444881 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:54 crc kubenswrapper[4746]: E0129 16:35:54.445001 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:54 crc kubenswrapper[4746]: E0129 16:35:54.445052 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.444886 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:54 crc kubenswrapper[4746]: E0129 16:35:54.445224 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.544901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.544946 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.544956 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.544974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.544985 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.647946 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.648017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.648034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.648057 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.648072 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.750637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.750683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.750696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.750717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.750730 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.853502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.853572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.853589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.853619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.853639 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.958282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.958336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.958350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.958371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:54 crc kubenswrapper[4746]: I0129 16:35:54.958385 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:54Z","lastTransitionTime":"2026-01-29T16:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.061770 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.061828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.061839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.061861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.061875 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.165232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.165291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.165309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.165332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.165347 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.268452 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.268507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.268520 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.268542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.268553 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.372326 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.372406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.372420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.372445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.372461 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.443322 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:29:07.918352757 +0000 UTC Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.445783 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:55 crc kubenswrapper[4746]: E0129 16:35:55.446016 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.475649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.475689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.475698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.475714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.475724 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.579981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.580088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.580102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.580123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.580137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.683606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.683678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.683695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.683721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.683741 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.737308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.737366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.737378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.737407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.737422 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: E0129 16:35:55.757942 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:55Z is after 
2025-08-24T17:21:41Z" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.763881 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.763978 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.764003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.764051 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.764076 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: E0129 16:35:55.787701 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:55Z is after 
2025-08-24T17:21:41Z" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.793571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.793637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.793651 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.793678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.793696 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: E0129 16:35:55.812219 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:55Z is after 
2025-08-24T17:21:41Z" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.818333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.818383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.818396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.818419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.818433 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: E0129 16:35:55.840699 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:55Z is after 
2025-08-24T17:21:41Z" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.846152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.846268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.846289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.846319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.846338 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: E0129 16:35:55.868173 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:55Z is after 
2025-08-24T17:21:41Z" Jan 29 16:35:55 crc kubenswrapper[4746]: E0129 16:35:55.868365 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.870515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.870560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.870575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.870597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.870611 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.973874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.973939 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.973953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.973975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:55 crc kubenswrapper[4746]: I0129 16:35:55.973991 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:55Z","lastTransitionTime":"2026-01-29T16:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.077479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.077687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.077708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.077730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.077769 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.180591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.180667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.180686 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.180718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.180738 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.284715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.284781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.284799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.284826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.284845 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.388241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.388333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.388347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.388371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.388388 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.443820 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:04:12.505885775 +0000 UTC Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.445272 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.445350 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.445417 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:56 crc kubenswrapper[4746]: E0129 16:35:56.445490 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:56 crc kubenswrapper[4746]: E0129 16:35:56.445640 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:56 crc kubenswrapper[4746]: E0129 16:35:56.445888 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.493122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.493172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.493211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.493234 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.493248 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.596848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.596900 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.596913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.596975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.596988 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.700569 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.700651 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.700671 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.700697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.700716 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.804340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.804394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.804408 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.804431 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.804450 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.907795 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.907856 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.907869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.907890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:56 crc kubenswrapper[4746]: I0129 16:35:56.907903 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:56Z","lastTransitionTime":"2026-01-29T16:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.010542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.010608 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.010625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.010654 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.010671 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.115472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.115539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.115557 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.115582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.115599 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.219177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.219324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.219346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.219800 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.220097 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.324579 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.324635 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.324653 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.324679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.324696 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.428559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.428589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.428597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.428614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.428625 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.444941 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:15:02.158815686 +0000 UTC Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.445117 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:57 crc kubenswrapper[4746]: E0129 16:35:57.445915 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.531766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.532175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.532323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.532425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.532591 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.636517 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.637178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.637888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.638218 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.638362 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.741371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.741437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.741454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.741483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.741503 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.845119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.845178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.845240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.845274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.845299 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.949087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.949145 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.949158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.949181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:57 crc kubenswrapper[4746]: I0129 16:35:57.949211 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:57Z","lastTransitionTime":"2026-01-29T16:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.052736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.052797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.052813 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.052831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.052842 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.156090 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.156130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.156140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.156156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.156168 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.259680 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.259733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.259750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.259773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.259791 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.365975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.366016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.366026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.366044 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.366056 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.446340 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:51:15.976272713 +0000 UTC Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.446535 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:35:58 crc kubenswrapper[4746]: E0129 16:35:58.446766 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.447601 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.447756 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:35:58 crc kubenswrapper[4746]: E0129 16:35:58.447923 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:35:58 crc kubenswrapper[4746]: E0129 16:35:58.448653 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.468738 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a046fe51633ae941c03e4fb1ad0fe34f4b1d0168bf165cd5d7c31e418a948140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.470695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.470781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.470804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.470832 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.470850 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.490168 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-74h7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"017d8376-e00b-442b-ac6b-b2189ff75132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:45Z\\\",\\\"message\\\":\\\"2026-01-29T16:35:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9\\\\n2026-01-29T16:35:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_32596bdb-1028-4a3a-9a89-fd38fd89acf9 to /host/opt/cni/bin/\\\\n2026-01-29T16:35:00Z [verbose] multus-daemon started\\\\n2026-01-29T16:35:00Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:35:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4d5pt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-74h7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.509234 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12c966bfe70aac6888094f6d2bf2a4e1648c7d75011f2aaebad55a5aae34df89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c0c45d1bf21f9adbb91553e31548f632e798d67961aee6274607a83f257651d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.529280 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.547618 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlbj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a9167a7-c54e-41a0-8c25-71ebb3d7bdc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd36dfac27ae886acc4d6af06c65c0dbc002bdd4391eebc3456e4d8fc4ddfea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gddwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlbj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.564282 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c20d2bd9-a984-476f-855f-6a0365ccdab7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2934911a05f89174fc07d4597f41df6d99964024c5f8000798a21d0b21fafa66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5t4vb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8vzgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.574898 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.574953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.574974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.575004 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.575027 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.588016 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7j88d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff347c3f-89aa-44c3-8cd2-29eea69d6bee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5448f618726d08d6d6cb55176a078522ad25e5a234171d1150161ce7d228ba20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac09d1685e1f2df0d052a519e21ab52d05f0a5bc2f94ea4d86d48dc724f4002e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78e5c3741ef88eb739bdac2c04bbf481ff33c4810d89fe0241570409ef2c2ca7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff7c607f4cdf7a883234985a6ad4d0a692f0be700fbe44b73c8f3aa41449bd20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://695c747acc221e72abde46e177880d15f53cec96cae77bf37651e12820144aeb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f789238ee9436d8b1cf19d88b7b735c04d732ab7887851c88fa6703d00c268f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://042ebc68ded9995a0a0af0dfa5613bbc204e0e378161bc19aa1a955fd97295eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:35:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w489f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7j88d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.605813 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c54750c7-4a46-4649-8b31-402a5bdacfb5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://230cfa40708cd768636c280ae85008a767ca4643af7b266f19de11b59e714413\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4edec8a337fa0d54945d316db3eb55aa5a288db74daa09243eca78d6e3b3151\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97570587ccf3753e80d3afe5b629fc0cc861396cc024609c0a86626ad9067f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d2ab9684e29552bac3da56496b6cd6f5cfc52efa0fae3af48ac740f5690b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.634443 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50599064-6fa5-43ed-9c1d-a58b3180d421\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:35:52Z\\\",\\\"message\\\":\\\"default: []services.lbConfig(nil)\\\\nF0129 16:35:52.252051 6829 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:52Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:35:52.252051 6829 services_controller.go:451] Built service openshift-machine-api/machine-api-operator-webhook cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, Af\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:35:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ht6sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdwxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.651671 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rl2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae29a6fb-63c0-4daf-8710-c11c2532e5f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78e2c2afaa2b9761c22c4a844cfb99654c274484901dbb38ea248d0818ca38f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srvdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rl2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.670738 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d211bc9-9005-4fe1-9d35-66e3d94cfc3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d69166ac34ce0d5d95622d5586251614fe9176a255bcc797abcbf31b3fe5741\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b45f538bd8ae04860d3c3c1d09eafc46ea49d3dbae118011662952c6bd65de1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtrx5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wlqq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.678619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.678683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.678703 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.678730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.678749 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.688127 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-f72wn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed3cddee-6243-41b8-9ac3-7ef6772d2960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pr7ps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:35:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-f72wn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.709032 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a8bea71-abba-4930-ada6-edf619cb771b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ab8874a30c4914696a5442d52dea594a8100c59b78cdb1a743b1ac4d8bfbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54582f48aa2c3ea6cb25ac771ff45b2f12d356bf1edde257901e4fedb6ea0fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://34255b2f8fa63db56736ae4554cabc191376ec4490865db7eab371f3fbd23496\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.724520 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c78854b0c5fead5a89e1bb4de50e285f799465ed780179d4300349c782919681\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.742554 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.760752 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0799c787-c274-4e25-a72c-0b56d6c03fdd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"file observer\\\\nW0129 16:34:58.210505 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:34:58.210642 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:34:58.211618 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-545950506/tls.crt::/tmp/serving-cert-545950506/tls.key\\\\\\\"\\\\nI0129 16:34:58.418512 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:34:58.425951 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:34:58.426006 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:34:58.426042 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:34:58.426049 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:34:58.440583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:34:58.440607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:34:58.440615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:34:58.440618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:34:58.440620 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:34:58.440623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 16:34:58.440791 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 16:34:58.443129 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:34:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:34:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:34:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:34:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.778700 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:34:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:35:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.781631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.781651 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.781659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.781674 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.781682 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.884981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.885034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.885049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.885071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.885082 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.988471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.988553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.988573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.988637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:58 crc kubenswrapper[4746]: I0129 16:35:58.988661 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:58Z","lastTransitionTime":"2026-01-29T16:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.091663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.092019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.092100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.092215 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.092303 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.196177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.196241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.196251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.196269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.196281 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.299173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.299593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.299605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.299621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.299632 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.402861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.402916 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.402930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.402957 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.402974 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.444914 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:35:59 crc kubenswrapper[4746]: E0129 16:35:59.445144 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.447535 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:50:11.982721719 +0000 UTC Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.506063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.506120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.506129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.506147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.506171 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.609237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.609299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.609311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.609332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.609346 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.712060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.712119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.712138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.712166 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.712185 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.815460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.815555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.815580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.815609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.815631 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.918587 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.918628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.918638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.918657 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:35:59 crc kubenswrapper[4746]: I0129 16:35:59.918670 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:35:59Z","lastTransitionTime":"2026-01-29T16:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.021749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.021815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.021842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.021875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.021901 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.124996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.125116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.125141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.125173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.125244 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.229339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.229482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.229507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.229535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.229554 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.333556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.333630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.333649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.333679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.333703 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.436673 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.436722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.436735 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.436757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.436776 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.445520 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.445519 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:00 crc kubenswrapper[4746]: E0129 16:36:00.445694 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.445520 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:00 crc kubenswrapper[4746]: E0129 16:36:00.445936 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:00 crc kubenswrapper[4746]: E0129 16:36:00.446089 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.448720 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:05:59.355973557 +0000 UTC Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.540021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.540077 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.540089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.540111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.540125 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.644331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.644413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.644437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.644469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.644490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.748736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.749261 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.749511 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.749646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.749731 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.853730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.853811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.853836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.853864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.853882 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.958937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.959439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.959645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.959801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:00 crc kubenswrapper[4746]: I0129 16:36:00.959953 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:00Z","lastTransitionTime":"2026-01-29T16:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.063462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.063546 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.063565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.063596 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.063631 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.167708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.167807 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.167830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.167870 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.167896 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.271167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.271222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.271232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.271247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.271259 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.375246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.375314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.375322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.375340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.375351 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.445273 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:01 crc kubenswrapper[4746]: E0129 16:36:01.445721 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.449285 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:41:07.938190609 +0000 UTC Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.460722 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.479119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.479224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.479247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.479281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.479303 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.583324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.583420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.583443 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.583479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.583500 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.685538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.685602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.685615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.685634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.685646 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.788564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.788603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.788618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.788643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.788661 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.891286 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.891361 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.891380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.891406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.891427 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.994519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.994630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.994651 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.994720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:01 crc kubenswrapper[4746]: I0129 16:36:01.994739 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:01Z","lastTransitionTime":"2026-01-29T16:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.099031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.099531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.099735 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.099937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.100426 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.204316 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.204393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.204413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.204456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.204521 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.279554 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.279729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.279955 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.279912514 +0000 UTC m=+148.680497168 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.279961 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.280096 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.28008638 +0000 UTC m=+148.680671034 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.308915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.308976 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.308990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.309013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.309030 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.381429 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.381523 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.381554 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.381680 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.381752 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.381733475 +0000 UTC m=+148.782318129 (durationBeforeRetry 1m4s). 
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382024 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382042 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382058 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382119 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.382110166 +0000 UTC m=+148.782694820 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382316 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382337 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382348 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.382376 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.382368093 +0000 UTC m=+148.782952747 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
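Editor's note: the kube-api-access-* volumes above are projected volumes that combine several sources (the service-account token plus the kube-root-ca.crt and, on OpenShift, openshift-service-ca.crt configmaps), which is why projected.go reports a bracketed list of per-source errors. The helper below splits that aggregate back into individual errors; a sketch only, assuming the error strings themselves never contain the ", object " separator.

// projected_errs.go: split the "[err1, err2]" aggregate that
// projected.go logs into one error per line.
package main

import (
	"fmt"
	"strings"
)

func splitAggregate(s string) []string {
	s = strings.TrimPrefix(s, "[")
	s = strings.TrimSuffix(s, "]")
	parts := strings.Split(s, ", object ")
	for i := 1; i < len(parts); i++ {
		parts[i] = "object " + parts[i] // restore the prefix Split consumed
	}
	return parts
}

func main() {
	agg := `[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]`
	for _, e := range splitAggregate(agg) {
		fmt.Println(" -", e)
	}
}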
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.413758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.413835 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.413863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.413899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.413918 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.445712 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.445828 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.445749 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.445964 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.446294 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
Jan 29 16:36:02 crc kubenswrapper[4746]: E0129 16:36:02.446169 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
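Editor's note: while the node is NotReady the status setter logs the Ready condition as inline JSON roughly every 100ms, as seen in the timestamps above. That JSON is trivially machine-readable; the sketch below decodes one occurrence with struct fields mirroring the keys visible in the log (not the full Kubernetes API types).

// condition.go: decode the node condition that setters.go logs
// when the node flips NotReady.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s) since %s: %s\n", c.Type, c.Status, c.Reason, c.LastTransitionTime.Format(time.RFC3339), c.Message)
}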
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.450270 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:36:59.971369344 +0000 UTC Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.518503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.518574 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.518595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.518620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.518639 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.622747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.622826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.622849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.622877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.622899 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.729045 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.729274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.729312 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.729344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.729373 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.833314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.833397 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.833421 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.833453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.833477 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.937554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.937640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.937655 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.937722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:02 crc kubenswrapper[4746]: I0129 16:36:02.937740 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:02Z","lastTransitionTime":"2026-01-29T16:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.041264 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.041318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.041332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.041353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.041368 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.145082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.145172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.145251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.145285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.145305 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.249683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.250330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.250356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.250391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.250413 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.353646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.353686 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.353702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.353722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.353733 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.475473 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:03 crc kubenswrapper[4746]: E0129 16:36:03.476504 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.476155 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:55:09.317185323 +0000 UTC Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.477804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.477847 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.477860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.477879 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.477896 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.581846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.581909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.581920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.581942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.581960 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.684926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.684986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.685001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.685027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.685043 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.788217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.788607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.788626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.788647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.788658 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.891636 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.891777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.891805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.891835 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.891848 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.995911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.995964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.995984 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.996001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:03 crc kubenswrapper[4746]: I0129 16:36:03.996014 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:03Z","lastTransitionTime":"2026-01-29T16:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.099337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.099430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.099455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.099486 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.099505 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.202591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.202663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.202681 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.202708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.202727 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.306487 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.306588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.306616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.306658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.306722 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.410099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.410177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.410240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.410273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.410295 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.446008 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.446089 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.446008 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:04 crc kubenswrapper[4746]: E0129 16:36:04.446311 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:04 crc kubenswrapper[4746]: E0129 16:36:04.446539 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:04 crc kubenswrapper[4746]: E0129 16:36:04.446969 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.477636 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:11:17.001193396 +0000 UTC Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.513834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.513899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.513917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.513946 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.513965 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.616470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.616537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.616555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.616584 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.616603 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.719177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.719232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.719281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.719298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.719308 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.822331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.822396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.822409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.822433 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.822448 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.926047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.926111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.926130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.926155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:04 crc kubenswrapper[4746]: I0129 16:36:04.926177 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:04Z","lastTransitionTime":"2026-01-29T16:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.030117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.030243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.030269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.030308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.030338 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.134105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.134232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.134252 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.134292 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.134311 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.237416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.237494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.237512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.237544 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.237566 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.340848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.340905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.340924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.340951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.340970 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.444740 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.444830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.444832 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
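Editor's note: each "No sandbox for pod can be found" line is followed by a pod worker sync attempt that is skipped while the CNI configuration is missing, so counting the skips per pod shows exactly which workloads are blocked on networking. A minimal stdlib sketch, keyed on the pod="..."/podUID="..." suffix format shown in this log:

// syncskips.go: count "Error syncing pod, skipping" events per pod.
// Usage: go run syncskips.go < kubelet.log
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var skipRe = regexp.MustCompile(`Error syncing pod, skipping.*pod="([^"]+)" podUID="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // long kubelet lines
	for sc.Scan() {
		if m := skipRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" ("+m[2]+")"]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%4d skips  %s\n", n, pod)
	}
}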
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.444860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.444895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.444922 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:05 crc kubenswrapper[4746]: E0129 16:36:05.445040 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.477879 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:26:12.697258292 +0000 UTC
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.548085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.548236 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.548275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.548309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.548332 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.651744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.651823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.651842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.651872 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.651896 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.754633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.754683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.754699 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.754723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.754740 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.858468 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.858534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.858555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.858589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.858613 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.962291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.962365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.962383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.962412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:05 crc kubenswrapper[4746]: I0129 16:36:05.962432 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:05Z","lastTransitionTime":"2026-01-29T16:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.065738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.065821 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.065845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.065873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.065894 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.128843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.128920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.128960 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.128996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.129027 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
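Editor's note: the entry that follows shows the kubelet's node-status patch being rejected and retried. Inside the escaped err string is an ordinary strategic-merge patch carrying the four node conditions, allocatable/capacity, and the node's cached image list with sizes. Once the \\\" quoting is undone it is plain JSON; the sketch below decodes a truncated stand-in for that payload and totals the image sizes. The struct fields mirror the keys visible in the log; this is an illustration, not the Kubernetes API types.

// imagesizes.go: sum the sizeBytes entries in a node-status patch.
package main

import (
	"encoding/json"
	"fmt"
)

type statusPatch struct {
	Status struct {
		Images []struct {
			Names     []string `json:"names"`
			SizeBytes int64    `json:"sizeBytes"`
		} `json:"images"`
	} `json:"status"`
}

func main() {
	// Truncated stand-in for the unescaped patch body below.
	raw := `{"status":{"images":[
		{"names":["quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887"],"sizeBytes":2887430265},
		{"names":["registry.redhat.io/redhat/redhat-operator-index:v4.18"],"sizeBytes":1523204510}]}}`
	var p statusPatch
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		panic(err)
	}
	var total int64
	for _, img := range p.Status.Images {
		total += img.SizeBytes
	}
	fmt.Printf("%d images, %.1f GiB on disk\n", len(p.Status.Images), float64(total)/(1<<30))
}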
Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.152249 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:36:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.163241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.163304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.163318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.163344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.163359 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.183481 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:36:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.188369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.188452 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.188474 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.188503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.188523 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.206924 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:36:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.213174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.213277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.213300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.213331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.213350 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.234778 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:36:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.240456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.240526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.240546 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.240573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.240593 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.262684 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:36:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"36d7a0f4-88b9-425a-915e-1df9cb8c68bf\\\",\\\"systemUUID\\\":\\\"a3b8f3d1-c6d9-472d-8c83-12b7d56140ac\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:36:06Z is after 2025-08-24T17:21:41Z" Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.262921 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.265606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.265665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.265684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.265709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.265729 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.370017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.370345 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.370391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.370430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.370459 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.445082 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.445328 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.445328 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.445501 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.445696 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:06 crc kubenswrapper[4746]: E0129 16:36:06.445851 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.474127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.474300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.474370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.474403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.474499 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:06Z","lastTransitionTime":"2026-01-29T16:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 16:36:06 crc kubenswrapper[4746]: I0129 16:36:06.478115 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:16:36.053755116 +0000 UTC
[... identical "Recording event message" + "Node became not ready" cycles at 16:36:06.578, 16:36:06.681, 16:36:06.785, 16:36:06.888, 16:36:06.992, 16:36:07.096, 16:36:07.201 and 16:36:07.305 omitted ...]
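
The node-status failure earlier in the log (the node.network-node-identity webhook at 127.0.0.1:9743 presenting a certificate that expired 2025-08-24) can be checked independently of the kubelet. A sketch, assuming that endpoint is reachable from the node: dialing with verification disabled still exposes the peer certificate, so its validity window can be read even though it is expired.

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// InsecureSkipVerify lets us inspect an already-expired certificate;
	// the address is the webhook endpoint from the error logged above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
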
[... identical "Recording event message" + "Node became not ready" cycle at 16:36:07.410 omitted ...]
Jan 29 16:36:07 crc kubenswrapper[4746]: I0129 16:36:07.445309 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:36:07 crc kubenswrapper[4746]: E0129 16:36:07.446115 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:36:07 crc kubenswrapper[4746]: I0129 16:36:07.446541 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"
Jan 29 16:36:07 crc kubenswrapper[4746]: E0129 16:36:07.446926 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421"
Jan 29 16:36:07 crc kubenswrapper[4746]: I0129 16:36:07.478950 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:03:36.373695135 +0000 UTC
[... identical "Recording event message" + "Node became not ready" cycles at 16:36:07.514 and 16:36:07.618 omitted ...]
[... identical "Recording event message" + "Node became not ready" cycles at 16:36:07.722, 16:36:07.834, 16:36:07.941, 16:36:08.044, 16:36:08.149, 16:36:08.253 and 16:36:08.357 omitted ...]
[... "No sandbox for pod can be found. Need to start a new one" and "Error syncing pod, skipping" entries for network-check-target-xd92c, networking-console-plugin-85b44fc459-gdk6g and network-metrics-daemon-f72wn repeat at 16:36:08.445, omitted ...]
pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.460859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.460942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.460964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.460999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.461028 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:08Z","lastTransitionTime":"2026-01-29T16:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.480084 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:11:06.825776336 +0000 UTC Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.512834 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.512795634 podStartE2EDuration="1m10.512795634s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.490087559 +0000 UTC m=+90.890672243" watchObservedRunningTime="2026-01-29 16:36:08.512795634 +0000 UTC m=+90.913380318" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.567585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.567648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.567663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.567687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.567705 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:08Z","lastTransitionTime":"2026-01-29T16:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.591773 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.591742095 podStartE2EDuration="7.591742095s" podCreationTimestamp="2026-01-29 16:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.566263321 +0000 UTC m=+90.966848005" watchObservedRunningTime="2026-01-29 16:36:08.591742095 +0000 UTC m=+90.992326769" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.633286 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-74h7n" podStartSLOduration=71.633246606 podStartE2EDuration="1m11.633246606s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.633075651 +0000 UTC m=+91.033660335" watchObservedRunningTime="2026-01-29 16:36:08.633246606 +0000 UTC m=+91.033831290" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.647033 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podStartSLOduration=71.646998971 podStartE2EDuration="1m11.646998971s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.646632391 +0000 UTC m=+91.047217035" watchObservedRunningTime="2026-01-29 16:36:08.646998971 +0000 UTC m=+91.047583615" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.664767 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7j88d" podStartSLOduration=71.664748018 podStartE2EDuration="1m11.664748018s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.664150651 +0000 UTC m=+91.064735305" watchObservedRunningTime="2026-01-29 16:36:08.664748018 +0000 UTC m=+91.065332662" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.669646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.669696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.669707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.669727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.669737 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:08Z","lastTransitionTime":"2026-01-29T16:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.684814 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.684789829 podStartE2EDuration="42.684789829s" podCreationTimestamp="2026-01-29 16:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.682763912 +0000 UTC m=+91.083348576" watchObservedRunningTime="2026-01-29 16:36:08.684789829 +0000 UTC m=+91.085374473" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.714253 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wlbj9" podStartSLOduration=71.714214892 podStartE2EDuration="1m11.714214892s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.713445211 +0000 UTC m=+91.114029875" watchObservedRunningTime="2026-01-29 16:36:08.714214892 +0000 UTC m=+91.114799546" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.748959 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wlqq2" podStartSLOduration=70.748929324 podStartE2EDuration="1m10.748929324s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.73232169 +0000 UTC m=+91.132906374" watchObservedRunningTime="2026-01-29 16:36:08.748929324 +0000 UTC m=+91.149513988" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.763430 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=64.76340789 podStartE2EDuration="1m4.76340789s" podCreationTimestamp="2026-01-29 16:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.761943988 +0000 UTC m=+91.162528632" watchObservedRunningTime="2026-01-29 16:36:08.76340789 +0000 UTC m=+91.163992534" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.771745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.771802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.771824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.771848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.771862 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:08Z","lastTransitionTime":"2026-01-29T16:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.828389 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6rl2h" podStartSLOduration=71.828362787 podStartE2EDuration="1m11.828362787s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:08.827784201 +0000 UTC m=+91.228368845" watchObservedRunningTime="2026-01-29 16:36:08.828362787 +0000 UTC m=+91.228947441" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.876485 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.876546 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.876557 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.876576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.876586 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:08Z","lastTransitionTime":"2026-01-29T16:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.979434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.979513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.979532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.979560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:08 crc kubenswrapper[4746]: I0129 16:36:08.979576 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:08Z","lastTransitionTime":"2026-01-29T16:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
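
Each "Observed pod startup duration" entry above is simple arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. A quick check against the kube-apiserver-crc entry, with both timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-29 16:34:58 +0000 UTC")
	// time.Parse accepts the fractional seconds even though the layout omits them.
	running, _ := time.Parse(layout, "2026-01-29 16:36:08.512795634 +0000 UTC")
	// Prints 1m10.512795634s, i.e. exactly the logged podStartSLOduration=70.512795634.
	fmt.Println("startup duration:", running.Sub(created))
}
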
[... identical "Recording event message" + "Node became not ready" cycles at 16:36:09.083, 16:36:09.187, 16:36:09.328 and 16:36:09.432 omitted ...]
[... "No sandbox for pod can be found. Need to start a new one" and "Error syncing pod, skipping" entries for network-check-source-55646444c4-trplf repeat at 16:36:09.445, omitted ...]
Jan 29 16:36:09 crc kubenswrapper[4746]: I0129 16:36:09.481182 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:00:47.269199567 +0000 UTC
[... identical "Recording event message" + "Node became not ready" cycle at 16:36:09.536 omitted ...]
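
The kubelet-serving certificate_manager lines above print a different rotation deadline on every attempt (2026-01-04, 2025-11-26, 2026-01-08, 2025-11-21) because the deadline is re-drawn with random jitter on each evaluation; in the upstream client-go manager the draw lands at roughly 70-90% of the certificate's lifetime, so every drawn deadline here is already in the past and rotation is retried each second. A sketch of that computation under those assumptions (the 0.7+0.2 jitter and the assumed issuance time are not stated in this log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics the jittered deadline: a point at 70-90% of the
// certificate's lifetime, re-drawn on each call, which is why consecutive
// log lines above show different deadlines for the same certificate.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // issuance time assumed for illustration
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
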
[... identical "Recording event message" + "Node became not ready" cycles at 16:36:09.639, 16:36:09.744, 16:36:09.847, 16:36:09.951, 16:36:10.055 and 16:36:10.160 omitted ...]
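
The condition={...} payload printed by setters.go in every "Node became not ready" line is a core/v1 NodeCondition serialized as JSON. A stripped-down sketch that decodes one of the logged payloads with plain encoding/json; the struct mirrors only the keys visible in this log (the real API type carries typed timestamps rather than strings):

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields visible in the logged condition payload.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:10Z","lastTransitionTime":"2026-01-29T16:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
}
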
Jan 29 16:36:10 crc kubenswrapper[4746]: I0129 16:36:10.445220 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:36:10 crc kubenswrapper[4746]: I0129 16:36:10.445379 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:36:10 crc kubenswrapper[4746]: I0129 16:36:10.445433 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:36:10 crc kubenswrapper[4746]: E0129 16:36:10.445631 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
Jan 29 16:36:10 crc kubenswrapper[4746]: E0129 16:36:10.445731 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:36:10 crc kubenswrapper[4746]: E0129 16:36:10.445839 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:36:10 crc kubenswrapper[4746]: I0129 16:36:10.481883 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:01:58.009653147 +0000 UTC
Jan 29 16:36:11 crc kubenswrapper[4746]: I0129 16:36:11.445056 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:36:11 crc kubenswrapper[4746]: E0129 16:36:11.445501 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:36:11 crc kubenswrapper[4746]: I0129 16:36:11.482837 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:50:44.866463193 +0000 UTC
Jan 29 16:36:12 crc kubenswrapper[4746]: I0129 16:36:12.445392 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:36:12 crc kubenswrapper[4746]: E0129 16:36:12.445561 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:36:12 crc kubenswrapper[4746]: I0129 16:36:12.445638 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:36:12 crc kubenswrapper[4746]: I0129 16:36:12.445603 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:36:12 crc kubenswrapper[4746]: E0129 16:36:12.445879 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:36:12 crc kubenswrapper[4746]: E0129 16:36:12.446036 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
Jan 29 16:36:12 crc kubenswrapper[4746]: I0129 16:36:12.483881 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:02:27.364544057 +0000 UTC
Jan 29 16:36:13 crc kubenswrapper[4746]: I0129 16:36:13.445416 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:36:13 crc kubenswrapper[4746]: E0129 16:36:13.445616 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:36:13 crc kubenswrapper[4746]: I0129 16:36:13.484397 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:27:56.100736475 +0000 UTC
Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.444892 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.444933 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.444955 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:36:14 crc kubenswrapper[4746]: E0129 16:36:14.445163 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:36:14 crc kubenswrapper[4746]: E0129 16:36:14.445270 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:36:14 crc kubenswrapper[4746]: E0129 16:36:14.445384 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960"
Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.485275 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:43:49.245686568 +0000 UTC
Has your network provider started?"} Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.637412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.637484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.637504 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.637533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.637557 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:14Z","lastTransitionTime":"2026-01-29T16:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.741238 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.741335 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.741353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.741381 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.741400 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:14Z","lastTransitionTime":"2026-01-29T16:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.844754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.844799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.844816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.844833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.844844 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:14Z","lastTransitionTime":"2026-01-29T16:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.947255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.947314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.947329 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.947348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:14 crc kubenswrapper[4746]: I0129 16:36:14.947360 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:14Z","lastTransitionTime":"2026-01-29T16:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.050802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.051357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.051551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.051702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.051845 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.155126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.155172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.155183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.155225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.155237 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.257833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.258572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.258644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.258679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.258703 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.362003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.362073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.362092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.362118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.362136 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.445315 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:15 crc kubenswrapper[4746]: E0129 16:36:15.445528 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.465346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.465410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.465437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.465465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.465486 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.485977 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:40:34.236223111 +0000 UTC Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.569322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.569416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.569451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.569483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.569506 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.672977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.673056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.673081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.673113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.673137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.776462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.776528 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.776548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.776572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.776592 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.880892 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.880955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.880976 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.881003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.881024 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.984180 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.984265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.984279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.984301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:15 crc kubenswrapper[4746]: I0129 16:36:15.984320 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:15Z","lastTransitionTime":"2026-01-29T16:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.087533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.087582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.087598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.087621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.087638 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:16Z","lastTransitionTime":"2026-01-29T16:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.191015 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.191102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.191116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.191135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.191148 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:16Z","lastTransitionTime":"2026-01-29T16:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.294358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.294406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.294419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.294438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.294450 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:16Z","lastTransitionTime":"2026-01-29T16:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.398379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.398432 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.398443 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.398463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.398474 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:16Z","lastTransitionTime":"2026-01-29T16:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.445140 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.445148 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.445463 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:16 crc kubenswrapper[4746]: E0129 16:36:16.445475 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:16 crc kubenswrapper[4746]: E0129 16:36:16.445560 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:16 crc kubenswrapper[4746]: E0129 16:36:16.445697 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.452220 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:16 crc kubenswrapper[4746]: E0129 16:36:16.452279 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:36:16 crc kubenswrapper[4746]: E0129 16:36:16.452330 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs podName:ed3cddee-6243-41b8-9ac3-7ef6772d2960 nodeName:}" failed. No retries permitted until 2026-01-29 16:37:20.452314505 +0000 UTC m=+162.852899149 (durationBeforeRetry 1m4s). 
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.486463 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:48:00.277935078 +0000 UTC
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.501240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.501306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.501322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.501342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.501358 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:16Z","lastTransitionTime":"2026-01-29T16:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.604725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.604762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.604772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.604790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.604806 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:16Z","lastTransitionTime":"2026-01-29T16:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.611086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.611115 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.611126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.611143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.611157 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:36:16Z","lastTransitionTime":"2026-01-29T16:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.672510 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd"]
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.673255 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.677698 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.680692 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.680701 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.680845 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.756390 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7bd780e1-6dc6-4166-ac94-177b15f508c9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.756488 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7bd780e1-6dc6-4166-ac94-177b15f508c9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd"
Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.756689 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bd780e1-6dc6-4166-ac94-177b15f508c9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd"
\"kubernetes.io/secret/7bd780e1-6dc6-4166-ac94-177b15f508c9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.756777 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7bd780e1-6dc6-4166-ac94-177b15f508c9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.756874 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd780e1-6dc6-4166-ac94-177b15f508c9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.865541 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7bd780e1-6dc6-4166-ac94-177b15f508c9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.865706 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7bd780e1-6dc6-4166-ac94-177b15f508c9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.865841 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bd780e1-6dc6-4166-ac94-177b15f508c9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.865950 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7bd780e1-6dc6-4166-ac94-177b15f508c9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.866678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd780e1-6dc6-4166-ac94-177b15f508c9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.866733 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/7bd780e1-6dc6-4166-ac94-177b15f508c9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.866821 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7bd780e1-6dc6-4166-ac94-177b15f508c9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.867914 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7bd780e1-6dc6-4166-ac94-177b15f508c9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.878290 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7bd780e1-6dc6-4166-ac94-177b15f508c9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.887377 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7bd780e1-6dc6-4166-ac94-177b15f508c9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kghwd\" (UID: \"7bd780e1-6dc6-4166-ac94-177b15f508c9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:16 crc kubenswrapper[4746]: I0129 16:36:16.994001 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" Jan 29 16:36:17 crc kubenswrapper[4746]: I0129 16:36:17.083920 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" event={"ID":"7bd780e1-6dc6-4166-ac94-177b15f508c9","Type":"ContainerStarted","Data":"314880f507f43340e18d0f181595b8a4b29a4996d82a2e380400682ecf4a27e5"} Jan 29 16:36:17 crc kubenswrapper[4746]: I0129 16:36:17.445632 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:17 crc kubenswrapper[4746]: E0129 16:36:17.446547 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:17 crc kubenswrapper[4746]: I0129 16:36:17.486722 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:50:53.278982915 +0000 UTC Jan 29 16:36:17 crc kubenswrapper[4746]: I0129 16:36:17.487321 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 16:36:17 crc kubenswrapper[4746]: I0129 16:36:17.495936 4746 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:36:18 crc kubenswrapper[4746]: I0129 16:36:18.090552 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" event={"ID":"7bd780e1-6dc6-4166-ac94-177b15f508c9","Type":"ContainerStarted","Data":"6c819c0c8970996e184e981e7d18c60107ed2f4e81480e0467e251d81490eb0a"} Jan 29 16:36:18 crc kubenswrapper[4746]: I0129 16:36:18.116995 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kghwd" podStartSLOduration=81.116964159 podStartE2EDuration="1m21.116964159s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:18.116745673 +0000 UTC m=+100.517330357" watchObservedRunningTime="2026-01-29 16:36:18.116964159 +0000 UTC m=+100.517548843" Jan 29 16:36:18 crc kubenswrapper[4746]: I0129 16:36:18.445306 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:18 crc kubenswrapper[4746]: E0129 16:36:18.446849 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:18 crc kubenswrapper[4746]: I0129 16:36:18.446895 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:18 crc kubenswrapper[4746]: I0129 16:36:18.447012 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:18 crc kubenswrapper[4746]: E0129 16:36:18.447084 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:18 crc kubenswrapper[4746]: E0129 16:36:18.447302 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:19 crc kubenswrapper[4746]: I0129 16:36:19.447596 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:19 crc kubenswrapper[4746]: E0129 16:36:19.447835 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:20 crc kubenswrapper[4746]: I0129 16:36:20.445690 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:20 crc kubenswrapper[4746]: I0129 16:36:20.445863 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:20 crc kubenswrapper[4746]: E0129 16:36:20.445964 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:20 crc kubenswrapper[4746]: I0129 16:36:20.446005 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:20 crc kubenswrapper[4746]: E0129 16:36:20.446271 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:20 crc kubenswrapper[4746]: E0129 16:36:20.446487 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:21 crc kubenswrapper[4746]: I0129 16:36:21.445333 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:21 crc kubenswrapper[4746]: E0129 16:36:21.445999 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:21 crc kubenswrapper[4746]: I0129 16:36:21.446613 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7" Jan 29 16:36:21 crc kubenswrapper[4746]: E0129 16:36:21.446889 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdwxv_openshift-ovn-kubernetes(50599064-6fa5-43ed-9c1d-a58b3180d421)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" Jan 29 16:36:22 crc kubenswrapper[4746]: I0129 16:36:22.445846 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:22 crc kubenswrapper[4746]: I0129 16:36:22.445941 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:22 crc kubenswrapper[4746]: I0129 16:36:22.446362 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:22 crc kubenswrapper[4746]: E0129 16:36:22.446535 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:22 crc kubenswrapper[4746]: E0129 16:36:22.446668 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:22 crc kubenswrapper[4746]: E0129 16:36:22.447281 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:22 crc kubenswrapper[4746]: I0129 16:36:22.470834 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 16:36:23 crc kubenswrapper[4746]: I0129 16:36:23.445414 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:23 crc kubenswrapper[4746]: E0129 16:36:23.446097 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:24 crc kubenswrapper[4746]: I0129 16:36:24.445530 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:24 crc kubenswrapper[4746]: E0129 16:36:24.445756 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:24 crc kubenswrapper[4746]: I0129 16:36:24.446169 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:24 crc kubenswrapper[4746]: E0129 16:36:24.447036 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:24 crc kubenswrapper[4746]: I0129 16:36:24.446320 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:24 crc kubenswrapper[4746]: E0129 16:36:24.447510 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:25 crc kubenswrapper[4746]: I0129 16:36:25.444699 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:25 crc kubenswrapper[4746]: E0129 16:36:25.444908 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:26 crc kubenswrapper[4746]: I0129 16:36:26.444969 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:26 crc kubenswrapper[4746]: I0129 16:36:26.445114 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:26 crc kubenswrapper[4746]: I0129 16:36:26.445385 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:26 crc kubenswrapper[4746]: E0129 16:36:26.445593 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:26 crc kubenswrapper[4746]: E0129 16:36:26.445832 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:26 crc kubenswrapper[4746]: E0129 16:36:26.445942 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:27 crc kubenswrapper[4746]: I0129 16:36:27.445048 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:27 crc kubenswrapper[4746]: E0129 16:36:27.445321 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:28 crc kubenswrapper[4746]: I0129 16:36:28.445473 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:28 crc kubenswrapper[4746]: I0129 16:36:28.445600 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:28 crc kubenswrapper[4746]: E0129 16:36:28.446796 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:28 crc kubenswrapper[4746]: I0129 16:36:28.446866 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:28 crc kubenswrapper[4746]: E0129 16:36:28.446961 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:28 crc kubenswrapper[4746]: E0129 16:36:28.447396 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:29 crc kubenswrapper[4746]: I0129 16:36:29.444876 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:29 crc kubenswrapper[4746]: E0129 16:36:29.445046 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:30 crc kubenswrapper[4746]: I0129 16:36:30.445074 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:30 crc kubenswrapper[4746]: I0129 16:36:30.445078 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:30 crc kubenswrapper[4746]: I0129 16:36:30.445260 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:30 crc kubenswrapper[4746]: E0129 16:36:30.445573 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:30 crc kubenswrapper[4746]: E0129 16:36:30.445679 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:30 crc kubenswrapper[4746]: E0129 16:36:30.445464 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:31 crc kubenswrapper[4746]: I0129 16:36:31.445403 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:31 crc kubenswrapper[4746]: E0129 16:36:31.445639 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.143679 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/1.log" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.144424 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/0.log" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.144485 4746 generic.go:334] "Generic (PLEG): container finished" podID="017d8376-e00b-442b-ac6b-b2189ff75132" containerID="9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10" exitCode=1 Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.144525 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerDied","Data":"9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10"} Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.144569 4746 scope.go:117] "RemoveContainer" containerID="121b33bb48425a29b8112844b0dead0dfbbd73fd22db4e151441cb0f9cd1fea8" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.145325 4746 scope.go:117] "RemoveContainer" containerID="9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10" Jan 29 16:36:32 crc kubenswrapper[4746]: E0129 16:36:32.145653 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-74h7n_openshift-multus(017d8376-e00b-442b-ac6b-b2189ff75132)\"" pod="openshift-multus/multus-74h7n" podUID="017d8376-e00b-442b-ac6b-b2189ff75132" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.168951 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=10.1675683 podStartE2EDuration="10.1675683s" podCreationTimestamp="2026-01-29 16:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:28.500362413 +0000 UTC m=+110.900947067" watchObservedRunningTime="2026-01-29 16:36:32.1675683 +0000 
UTC m=+114.568152954" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.445631 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.445714 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:32 crc kubenswrapper[4746]: I0129 16:36:32.446101 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:32 crc kubenswrapper[4746]: E0129 16:36:32.446228 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:32 crc kubenswrapper[4746]: E0129 16:36:32.446462 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:32 crc kubenswrapper[4746]: E0129 16:36:32.446647 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:33 crc kubenswrapper[4746]: I0129 16:36:33.149702 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/1.log" Jan 29 16:36:33 crc kubenswrapper[4746]: I0129 16:36:33.445106 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:33 crc kubenswrapper[4746]: E0129 16:36:33.445393 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:34 crc kubenswrapper[4746]: I0129 16:36:34.445045 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:34 crc kubenswrapper[4746]: I0129 16:36:34.445247 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:34 crc kubenswrapper[4746]: I0129 16:36:34.445387 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:34 crc kubenswrapper[4746]: E0129 16:36:34.445606 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:34 crc kubenswrapper[4746]: E0129 16:36:34.447177 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:34 crc kubenswrapper[4746]: E0129 16:36:34.447335 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:34 crc kubenswrapper[4746]: I0129 16:36:34.447347 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7" Jan 29 16:36:35 crc kubenswrapper[4746]: I0129 16:36:35.161776 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/3.log" Jan 29 16:36:35 crc kubenswrapper[4746]: I0129 16:36:35.165421 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerStarted","Data":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} Jan 29 16:36:35 crc kubenswrapper[4746]: I0129 16:36:35.165976 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:36:35 crc kubenswrapper[4746]: I0129 16:36:35.209118 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podStartSLOduration=97.209081301 podStartE2EDuration="1m37.209081301s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:36:35.205793669 +0000 UTC m=+117.606378323" watchObservedRunningTime="2026-01-29 16:36:35.209081301 +0000 UTC m=+117.609665995" Jan 29 16:36:35 crc kubenswrapper[4746]: I0129 16:36:35.445885 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:35 crc kubenswrapper[4746]: E0129 16:36:35.446057 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:35 crc kubenswrapper[4746]: I0129 16:36:35.481751 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f72wn"] Jan 29 16:36:35 crc kubenswrapper[4746]: I0129 16:36:35.481883 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:35 crc kubenswrapper[4746]: E0129 16:36:35.481975 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:36 crc kubenswrapper[4746]: I0129 16:36:36.444820 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:36 crc kubenswrapper[4746]: I0129 16:36:36.444883 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:36 crc kubenswrapper[4746]: E0129 16:36:36.445522 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:36 crc kubenswrapper[4746]: E0129 16:36:36.445789 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:37 crc kubenswrapper[4746]: I0129 16:36:37.445129 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:37 crc kubenswrapper[4746]: I0129 16:36:37.445177 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:37 crc kubenswrapper[4746]: E0129 16:36:37.445403 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:37 crc kubenswrapper[4746]: E0129 16:36:37.445607 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:38 crc kubenswrapper[4746]: I0129 16:36:38.445340 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:38 crc kubenswrapper[4746]: E0129 16:36:38.447117 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:38 crc kubenswrapper[4746]: I0129 16:36:38.447515 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:38 crc kubenswrapper[4746]: E0129 16:36:38.447637 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:38 crc kubenswrapper[4746]: E0129 16:36:38.454057 4746 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 16:36:38 crc kubenswrapper[4746]: E0129 16:36:38.539694 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:36:39 crc kubenswrapper[4746]: I0129 16:36:39.445426 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:39 crc kubenswrapper[4746]: E0129 16:36:39.446060 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:39 crc kubenswrapper[4746]: I0129 16:36:39.445514 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:39 crc kubenswrapper[4746]: E0129 16:36:39.446528 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:40 crc kubenswrapper[4746]: I0129 16:36:40.446034 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:40 crc kubenswrapper[4746]: E0129 16:36:40.447408 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:40 crc kubenswrapper[4746]: I0129 16:36:40.448030 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:40 crc kubenswrapper[4746]: E0129 16:36:40.448389 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:41 crc kubenswrapper[4746]: I0129 16:36:41.445571 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:41 crc kubenswrapper[4746]: I0129 16:36:41.445613 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:41 crc kubenswrapper[4746]: E0129 16:36:41.445821 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:41 crc kubenswrapper[4746]: E0129 16:36:41.446322 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:42 crc kubenswrapper[4746]: I0129 16:36:42.444971 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:42 crc kubenswrapper[4746]: I0129 16:36:42.445040 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:42 crc kubenswrapper[4746]: E0129 16:36:42.445264 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:42 crc kubenswrapper[4746]: E0129 16:36:42.445578 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:43 crc kubenswrapper[4746]: I0129 16:36:43.445615 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:43 crc kubenswrapper[4746]: I0129 16:36:43.445671 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:43 crc kubenswrapper[4746]: E0129 16:36:43.445859 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:43 crc kubenswrapper[4746]: E0129 16:36:43.446045 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:43 crc kubenswrapper[4746]: E0129 16:36:43.541245 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:36:44 crc kubenswrapper[4746]: I0129 16:36:44.446394 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:44 crc kubenswrapper[4746]: E0129 16:36:44.446640 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:44 crc kubenswrapper[4746]: I0129 16:36:44.447062 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:44 crc kubenswrapper[4746]: E0129 16:36:44.447349 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:45 crc kubenswrapper[4746]: I0129 16:36:45.445382 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:45 crc kubenswrapper[4746]: I0129 16:36:45.445384 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:45 crc kubenswrapper[4746]: I0129 16:36:45.446382 4746 scope.go:117] "RemoveContainer" containerID="9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10" Jan 29 16:36:45 crc kubenswrapper[4746]: E0129 16:36:45.446478 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:45 crc kubenswrapper[4746]: E0129 16:36:45.446538 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:46 crc kubenswrapper[4746]: I0129 16:36:46.216989 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/1.log" Jan 29 16:36:46 crc kubenswrapper[4746]: I0129 16:36:46.217547 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerStarted","Data":"d5ef49e5ef0c78740093a11d20b861a3b623803368308cfc198a4d068e879da9"} Jan 29 16:36:46 crc kubenswrapper[4746]: I0129 16:36:46.445085 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:46 crc kubenswrapper[4746]: I0129 16:36:46.445172 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:46 crc kubenswrapper[4746]: E0129 16:36:46.445363 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:46 crc kubenswrapper[4746]: E0129 16:36:46.445480 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:47 crc kubenswrapper[4746]: I0129 16:36:47.445290 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:47 crc kubenswrapper[4746]: I0129 16:36:47.445290 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:47 crc kubenswrapper[4746]: E0129 16:36:47.445562 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:36:47 crc kubenswrapper[4746]: E0129 16:36:47.445703 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-f72wn" podUID="ed3cddee-6243-41b8-9ac3-7ef6772d2960" Jan 29 16:36:48 crc kubenswrapper[4746]: I0129 16:36:48.444981 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:48 crc kubenswrapper[4746]: I0129 16:36:48.444992 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:48 crc kubenswrapper[4746]: E0129 16:36:48.446295 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:36:48 crc kubenswrapper[4746]: E0129 16:36:48.446443 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:36:49 crc kubenswrapper[4746]: I0129 16:36:49.445390 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:36:49 crc kubenswrapper[4746]: I0129 16:36:49.445390 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn" Jan 29 16:36:49 crc kubenswrapper[4746]: I0129 16:36:49.449033 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 16:36:49 crc kubenswrapper[4746]: I0129 16:36:49.449086 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 16:36:49 crc kubenswrapper[4746]: I0129 16:36:49.449055 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 16:36:49 crc kubenswrapper[4746]: I0129 16:36:49.449276 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 16:36:50 crc kubenswrapper[4746]: I0129 16:36:50.445623 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:36:50 crc kubenswrapper[4746]: I0129 16:36:50.445640 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:36:50 crc kubenswrapper[4746]: I0129 16:36:50.450385 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 16:36:50 crc kubenswrapper[4746]: I0129 16:36:50.451729 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 16:36:52 crc kubenswrapper[4746]: I0129 16:36:52.589093 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.701306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.801066 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.801754 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.801831 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9v9dn"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.802807 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9v9dn" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.803332 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.803926 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.807336 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.809038 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.809513 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.810742 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mkwgx"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.812113 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.811156 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.813479 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.813535 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.814528 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.814926 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.815058 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.815226 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.815576 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.816141 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.817005 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8q4kh"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.817503 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.819077 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t9srx"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.819658 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.822996 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vcrws"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.823507 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.823631 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.824813 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.828595 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.829259 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.829619 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.830241 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.830349 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.830416 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.830770 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.831049 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.832335 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.832721 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.838499 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.838536 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.838809 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.839055 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.839136 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.839250 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.839393 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.842140 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.842502 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.845395 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.846018 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.847003 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.848239 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.860546 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.860755 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.862285 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.878053 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljsjj"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.878478 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8gghv"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.878807 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ds4g2"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.879146 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.879820 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ljsjj" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.879922 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.881285 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.881829 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.882055 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.882388 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.882654 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.882756 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.882900 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.883061 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.883162 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.883863 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.887432 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t4j2d"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.888004 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ggm4h"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.888298 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.888636 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.889279 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.889492 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.893750 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.894928 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.895590 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.895926 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.896075 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.896267 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.896410 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.896533 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.896661 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.896814 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.896928 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.897055 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.897217 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.898081 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.898831 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.904292 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm"] Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.904837 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905726 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-audit-dir\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905769 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-etcd-client\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905793 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b97\" (UniqueName: \"kubernetes.io/projected/d459a560-d49c-42c7-afe1-22dc6a872265-kube-api-access-s5b97\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905817 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-serving-cert\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905843 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59l5\" (UniqueName: \"kubernetes.io/projected/bdc1f9ce-e4b4-492f-b909-d84c33c52543-kube-api-access-c59l5\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905872 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d459a560-d49c-42c7-afe1-22dc6a872265-serving-cert\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905889 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdc1f9ce-e4b4-492f-b909-d84c33c52543-trusted-ca\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905909 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-encryption-config\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905925 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc1f9ce-e4b4-492f-b909-d84c33c52543-config\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905947 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905963 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppncf\" (UniqueName: \"kubernetes.io/projected/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-kube-api-access-ppncf\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905980 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc1f9ce-e4b4-492f-b909-d84c33c52543-serving-cert\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.905997 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-audit-policies\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.906013 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.906045 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d459a560-d49c-42c7-afe1-22dc6a872265-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.906739 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.907138 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.907225 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.907467 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.907530 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.907613 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.907775 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.907898 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.908038 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.908133 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.908284 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.908357 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.908413 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.908996 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.909128 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.909265 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.909378 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.909590 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.909690 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.909808 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910017 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910158 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910341 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910399 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910451 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910496 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910584 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910669 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910688 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910765 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910887 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910726 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.911089 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.910676 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.911227 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.911380 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.911586 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.911785 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.911890 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.913285 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.913427 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.916397 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.916805 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.923985 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9678f"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.924511 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.925117 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.925445 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9678f"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.925705 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.930477 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.932942 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.944915 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wcd6d"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.947152 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.950865 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.955838 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.957474 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.957547 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.957599 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.962850 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.978831 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.980401 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.980851 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.981339 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.981179 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.981702 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.982109 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.982380 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.983995 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.984382 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.984623 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.984850 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.985022 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.986177 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.986680 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.987266 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.988622 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.988944 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khd9z"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.989793 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.990049 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.990268 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.990631 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.990949 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.994722 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-np5s4"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.995635 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-np5s4"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.995965 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.997519 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh"]
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.998135 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh"
Jan 29 16:36:57 crc kubenswrapper[4746]: I0129 16:36:57.998544 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9v9dn"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.000153 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.001893 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bz2bz"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.002460 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.003240 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.004519 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.005158 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.006881 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-etcd-client\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.006927 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5b97\" (UniqueName: \"kubernetes.io/projected/d459a560-d49c-42c7-afe1-22dc6a872265-kube-api-access-s5b97\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.006952 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-serving-cert\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007007 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c59l5\" (UniqueName: \"kubernetes.io/projected/bdc1f9ce-e4b4-492f-b909-d84c33c52543-kube-api-access-c59l5\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007049 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d459a560-d49c-42c7-afe1-22dc6a872265-serving-cert\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007073 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdc1f9ce-e4b4-492f-b909-d84c33c52543-trusted-ca\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007103 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-encryption-config\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007124 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc1f9ce-e4b4-492f-b909-d84c33c52543-config\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007148 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007178 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppncf\" (UniqueName: \"kubernetes.io/projected/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-kube-api-access-ppncf\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007223 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc1f9ce-e4b4-492f-b909-d84c33c52543-serving-cert\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007249 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-audit-policies\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007268 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007311 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d459a560-d49c-42c7-afe1-22dc6a872265-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007337 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-audit-dir\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.007429 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-audit-dir\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.008595 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.009409 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc1f9ce-e4b4-492f-b909-d84c33c52543-config\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.010500 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vcrws"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.011135 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d459a560-d49c-42c7-afe1-22dc6a872265-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.011244 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.011305 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-audit-policies\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.012169 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdc1f9ce-e4b4-492f-b909-d84c33c52543-trusted-ca\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.012304 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.012356 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.014108 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dd6kd"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.014239 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-encryption-config\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.017491 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-serving-cert\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.017993 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-etcd-client\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.018917 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdc1f9ce-e4b4-492f-b909-d84c33c52543-serving-cert\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022152 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8q4kh"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022184 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t4j2d"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022212 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022224 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022238 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljsjj"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022248 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8gghv"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022359 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.022687 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t9srx"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.023729 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khd9z"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.025359 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.025640 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-np5s4"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.026581 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.027874 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.030796 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.030974 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9678f"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.033458 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wcd6d"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.035146 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d459a560-d49c-42c7-afe1-22dc6a872265-serving-cert\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.037149 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mkwgx"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.038676 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ds4g2"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.040861 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.054024 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.054105 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.054134 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.055611 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.060237 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.064122 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.064719 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.072755 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.072827 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zkrtb"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.074534 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dd6kd"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.074651 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zkrtb"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.081329 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.083402 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rrn9l"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.084073 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rrn9l"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.084583 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.086279 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.088302 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.089777 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.092480 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bz2bz"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.092521 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rrn9l"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.095292 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.096942 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.098093 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2shkn"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.099290 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2shkn"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.099708 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2shkn"]
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.104470 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.164254 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.184431 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.205455 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.208823 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b660e001-5d85-4ab4-a617-82082e447e2a-config\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.208864 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69d5w\" (UniqueName: \"kubernetes.io/projected/5f12f6a1-8b0c-4e43-bdce-982701cd9478-kube-api-access-69d5w\") pod \"cluster-samples-operator-665b6dd947-6x98j\" (UID: \"5f12f6a1-8b0c-4e43-bdce-982701cd9478\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.208894 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99wf\" (UniqueName: \"kubernetes.io/projected/939b72c6-643d-4e50-8223-7596ca0c5a6a-kube-api-access-s99wf\") pod \"downloads-7954f5f757-9v9dn\" (UID: \"939b72c6-643d-4e50-8223-7596ca0c5a6a\") " pod="openshift-console/downloads-7954f5f757-9v9dn"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.208924 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.208950 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-client-ca\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.208973 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5e34714-dec2-46cf-b5b4-514f66525546-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8gghv\" (UID: \"b5e34714-dec2-46cf-b5b4-514f66525546\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209090 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-config\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209354 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209400 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20be07e6-cf06-443d-b49f-f893798034da-serving-cert\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209435 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-apiservice-cert\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209469 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdc7\" (UniqueName: \"kubernetes.io/projected/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-kube-api-access-dwdc7\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209503 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4b33d4-6ef0-465d-99be-20a2816090f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209530 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-tmpfs\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209567 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-trusted-ca\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209640 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27c3b17b-1acd-412d-90eb-5782d6db606e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209681 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f10316f-35a6-4906-8d11-5bed4a8b9572-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209747 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-config\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209777 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-config\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209827 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f6be6c-9a7f-4aae-b850-565c42dd012d-serving-cert\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209878 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08d2e95-cdc5-4934-94ec-2cdb56479e29-serving-cert\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209909 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20be07e6-cf06-443d-b49f-f893798034da-etcd-client\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209946 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwtg8\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-kube-api-access-pwtg8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209972 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f10316f-35a6-4906-8d11-5bed4a8b9572-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.209994 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-service-ca-bundle\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210020 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210085 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-client-ca\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210121 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fht\" (UniqueName: \"kubernetes.io/projected/4d4b33d4-6ef0-465d-99be-20a2816090f9-kube-api-access-k2fht\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210152 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b660e001-5d85-4ab4-a617-82082e447e2a-auth-proxy-config\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210245 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27c3b17b-1acd-412d-90eb-5782d6db606e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210296 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210321 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvgbg\" (UniqueName: \"kubernetes.io/projected/11f6be6c-9a7f-4aae-b850-565c42dd012d-kube-api-access-lvgbg\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210346 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-etcd-service-ca\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210387 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb497\" (UniqueName: \"kubernetes.io/projected/b660e001-5d85-4ab4-a617-82082e447e2a-kube-api-access-zb497\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210412 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbrrb\" (UniqueName: \"kubernetes.io/projected/20be07e6-cf06-443d-b49f-f893798034da-kube-api-access-kbrrb\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210438 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvk2k\" (UniqueName: \"kubernetes.io/projected/6ba9e438-8285-4efb-9125-db88ba0cc4c7-kube-api-access-tvk2k\") pod \"dns-operator-744455d44c-mkwgx\" (UID: \"6ba9e438-8285-4efb-9125-db88ba0cc4c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210463 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b660e001-5d85-4ab4-a617-82082e447e2a-machine-approver-tls\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210491 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d4b33d4-6ef0-465d-99be-20a2816090f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210796 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f12f6a1-8b0c-4e43-bdce-982701cd9478-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6x98j\" (UID: \"5f12f6a1-8b0c-4e43-bdce-982701cd9478\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210878 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xbcf\" (UniqueName: \"kubernetes.io/projected/8f10316f-35a6-4906-8d11-5bed4a8b9572-kube-api-access-7xbcf\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.210947 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-etcd-ca\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211029 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-certificates\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.211042 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:58.71102571 +0000 UTC m=+141.111610344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211153 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmhmq\" (UniqueName: \"kubernetes.io/projected/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-kube-api-access-bmhmq\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211232 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fslng\" (UniqueName: \"kubernetes.io/projected/b08d2e95-cdc5-4934-94ec-2cdb56479e29-kube-api-access-fslng\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211279 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-serving-cert\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211308 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj2m5\" (UniqueName: \"kubernetes.io/projected/b5e34714-dec2-46cf-b5b4-514f66525546-kube-api-access-jj2m5\") pod \"multus-admission-controller-857f4d67dd-8gghv\" (UID: \"b5e34714-dec2-46cf-b5b4-514f66525546\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211336 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9qxd\" (UniqueName: \"kubernetes.io/projected/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-kube-api-access-t9qxd\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211377 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-bound-sa-token\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211403 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f10316f-35a6-4906-8d11-5bed4a8b9572-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-config\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211454 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-webhook-cert\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211504 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba9e438-8285-4efb-9125-db88ba0cc4c7-metrics-tls\") pod \"dns-operator-744455d44c-mkwgx\" (UID: \"6ba9e438-8285-4efb-9125-db88ba0cc4c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.211560 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-tls\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.238896 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.244305 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.265127 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.290480 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.304439 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.312506 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.312690 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-29 16:36:58.812654374 +0000 UTC m=+141.213239018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.312756 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-config\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.312783 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7becf4a7-7ad1-4d20-9707-a28330253dfd-secret-volume\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.312804 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9s2b\" (UniqueName: \"kubernetes.io/projected/5d373715-9357-4191-bd8d-b87840962375-kube-api-access-t9s2b\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.312840 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.312866 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08d2e95-cdc5-4934-94ec-2cdb56479e29-serving-cert\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.312920 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20be07e6-cf06-443d-b49f-f893798034da-etcd-client\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313050 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-etcd-client\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313134 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-service-ca-bundle\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313165 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313242 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-service-ca-bundle\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313268 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313295 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-client-ca\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313325 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313361 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2fht\" (UniqueName: \"kubernetes.io/projected/4d4b33d4-6ef0-465d-99be-20a2816090f9-kube-api-access-k2fht\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313387 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlpzw\" (UniqueName: \"kubernetes.io/projected/e37f58d7-0e3a-4873-b381-e81be85e8f3f-kube-api-access-jlpzw\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: 
\"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313425 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313449 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwkvv\" (UniqueName: \"kubernetes.io/projected/8f3741d9-db6d-4387-874f-2cf7b81fb737-kube-api-access-lwkvv\") pod \"migrator-59844c95c7-wts2v\" (UID: \"8f3741d9-db6d-4387-874f-2cf7b81fb737\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313472 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-default-certificate\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313504 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313532 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313555 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xjx\" (UniqueName: \"kubernetes.io/projected/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-kube-api-access-44xjx\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313584 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6hzc\" (UniqueName: \"kubernetes.io/projected/b4842de2-18f5-4f78-813f-6cbcb7b1b740-kube-api-access-v6hzc\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313633 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313689 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313723 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvgbg\" (UniqueName: \"kubernetes.io/projected/11f6be6c-9a7f-4aae-b850-565c42dd012d-kube-api-access-lvgbg\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313748 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-etcd-service-ca\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313779 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cad57b29-4969-46ed-a38d-479fa8848fa9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313812 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de2aede4-40e2-47b3-8f78-c28505411b6b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313842 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5a47724-9572-4886-a2d9-36a0a56b4b20-config-volume\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313871 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42v8p\" (UniqueName: \"kubernetes.io/projected/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-kube-api-access-42v8p\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313902 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/b660e001-5d85-4ab4-a617-82082e447e2a-machine-approver-tls\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313926 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313957 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f12f6a1-8b0c-4e43-bdce-982701cd9478-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6x98j\" (UID: \"5f12f6a1-8b0c-4e43-bdce-982701cd9478\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.313989 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e37f58d7-0e3a-4873-b381-e81be85e8f3f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314020 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xbcf\" (UniqueName: \"kubernetes.io/projected/8f10316f-35a6-4906-8d11-5bed4a8b9572-kube-api-access-7xbcf\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314044 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-service-ca\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314072 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314101 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-node-bootstrap-token\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314126 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-srv-cert\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314148 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314177 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-certificates\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmhmq\" (UniqueName: \"kubernetes.io/projected/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-kube-api-access-bmhmq\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314258 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad57b29-4969-46ed-a38d-479fa8848fa9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314281 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjknz\" (UniqueName: \"kubernetes.io/projected/608c383e-45e1-43dd-b8ad-9a7499953754-kube-api-access-kjknz\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314304 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314326 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml7tr\" (UniqueName: \"kubernetes.io/projected/33f6daff-2886-42dd-95ed-9aeb6aad3ec0-kube-api-access-ml7tr\") pod \"package-server-manager-789f6589d5-rmcph\" (UID: \"33f6daff-2886-42dd-95ed-9aeb6aad3ec0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314350 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-bound-sa-token\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33f6daff-2886-42dd-95ed-9aeb6aad3ec0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rmcph\" (UID: \"33f6daff-2886-42dd-95ed-9aeb6aad3ec0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314398 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-mountpoint-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314425 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314455 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-config\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314485 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de2aede4-40e2-47b3-8f78-c28505411b6b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314512 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba9e438-8285-4efb-9125-db88ba0cc4c7-metrics-tls\") pod \"dns-operator-744455d44c-mkwgx\" (UID: \"6ba9e438-8285-4efb-9125-db88ba0cc4c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314510 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-service-ca-bundle\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314541 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-image-import-ca\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314619 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-tls\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314661 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314755 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-client-ca\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.314863 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-etcd-service-ca\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.315216 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:58.815164445 +0000 UTC m=+141.215749229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.315749 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-serving-cert\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.315800 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.315838 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnvj\" (UniqueName: \"kubernetes.io/projected/b1d40aef-51e1-48d1-ac44-5ca93dd7b612-kube-api-access-lxnvj\") pod \"control-plane-machine-set-operator-78cbb6b69f-gkxmb\" (UID: \"b1d40aef-51e1-48d1-ac44-5ca93dd7b612\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.315891 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jt8\" (UniqueName: \"kubernetes.io/projected/9e56505d-05bd-4223-a84d-4622ce4267ee-kube-api-access-n8jt8\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.315961 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69d5w\" (UniqueName: \"kubernetes.io/projected/5f12f6a1-8b0c-4e43-bdce-982701cd9478-kube-api-access-69d5w\") pod \"cluster-samples-operator-665b6dd947-6x98j\" (UID: \"5f12f6a1-8b0c-4e43-bdce-982701cd9478\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316029 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnx8w\" (UniqueName: \"kubernetes.io/projected/81f1a95c-a6e2-49f5-9adc-6202cb477155-kube-api-access-mnx8w\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316138 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-encryption-config\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316210 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-csi-data-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316257 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5e34714-dec2-46cf-b5b4-514f66525546-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8gghv\" (UID: \"b5e34714-dec2-46cf-b5b4-514f66525546\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316283 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-client-ca\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316304 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4842de2-18f5-4f78-813f-6cbcb7b1b740-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316448 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-certificates\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316440 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e37f58d7-0e3a-4873-b381-e81be85e8f3f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316553 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5a47724-9572-4886-a2d9-36a0a56b4b20-metrics-tls\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316585 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316621 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1442f3f-48b3-4356-bcb0-773b64ccab8f-audit-dir\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316641 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-apiservice-cert\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316657 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-images\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.316688 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr96j\" (UniqueName: \"kubernetes.io/projected/7ec85074-8a89-495b-a55e-9a05cbaae62f-kube-api-access-zr96j\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317020 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317053 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20be07e6-cf06-443d-b49f-f893798034da-serving-cert\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317084 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f1a95c-a6e2-49f5-9adc-6202cb477155-config\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317117 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-metrics-certs\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317154 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317354 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20be07e6-cf06-443d-b49f-f893798034da-etcd-client\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317407 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-client-ca\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317495 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b1e0c98-de4a-4744-b713-4985cfe776b4-proxy-tls\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317538 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvh5\" (UniqueName: \"kubernetes.io/projected/f1442f3f-48b3-4356-bcb0-773b64ccab8f-kube-api-access-7pvh5\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317590 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-trusted-ca\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317640 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-config\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27c3b17b-1acd-412d-90eb-5782d6db606e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317769 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-config\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317852 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.317950 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f6be6c-9a7f-4aae-b850-565c42dd012d-serving-cert\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318030 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwtg8\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-kube-api-access-pwtg8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318218 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f10316f-35a6-4906-8d11-5bed4a8b9572-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318377 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-audit\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318422 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-trusted-ca-bundle\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318532 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b1e0c98-de4a-4744-b713-4985cfe776b4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318565 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318598 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-stats-auth\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318653 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b660e001-5d85-4ab4-a617-82082e447e2a-auth-proxy-config\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318689 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s4hp\" (UniqueName: \"kubernetes.io/projected/058aae17-e28d-48f3-83c1-9190c9f45a89-kube-api-access-8s4hp\") pod \"ingress-canary-rrn9l\" (UID: \"058aae17-e28d-48f3-83c1-9190c9f45a89\") " pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318701 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-config\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318893 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27c3b17b-1acd-412d-90eb-5782d6db606e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318935 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-dir\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.318967 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-plugins-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319392 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27c3b17b-1acd-412d-90eb-5782d6db606e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319608 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319694 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08d2e95-cdc5-4934-94ec-2cdb56479e29-serving-cert\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319700 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-srv-cert\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319758 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-trusted-ca\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319787 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-registration-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319927 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb497\" (UniqueName: \"kubernetes.io/projected/b660e001-5d85-4ab4-a617-82082e447e2a-kube-api-access-zb497\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.319963 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbrrb\" (UniqueName: \"kubernetes.io/projected/20be07e6-cf06-443d-b49f-f893798034da-kube-api-access-kbrrb\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320008 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvk2k\" (UniqueName: \"kubernetes.io/projected/6ba9e438-8285-4efb-9125-db88ba0cc4c7-kube-api-access-tvk2k\") pod \"dns-operator-744455d44c-mkwgx\" (UID: \"6ba9e438-8285-4efb-9125-db88ba0cc4c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320292 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d4b33d4-6ef0-465d-99be-20a2816090f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: 
\"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320329 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-socket-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320359 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de2aede4-40e2-47b3-8f78-c28505411b6b-config\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320393 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-proxy-tls\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320423 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-config\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320461 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-etcd-ca\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320492 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad57b29-4969-46ed-a38d-479fa8848fa9-config\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320526 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7kz\" (UniqueName: \"kubernetes.io/projected/a5a47724-9572-4886-a2d9-36a0a56b4b20-kube-api-access-sk7kz\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320552 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c86p8\" (UniqueName: \"kubernetes.io/projected/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-kube-api-access-c86p8\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc 
kubenswrapper[4746]: I0129 16:36:58.320586 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fslng\" (UniqueName: \"kubernetes.io/projected/b08d2e95-cdc5-4934-94ec-2cdb56479e29-kube-api-access-fslng\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320612 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-images\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320663 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-serving-cert\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320687 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvd2\" (UniqueName: \"kubernetes.io/projected/bac6ed8f-e181-484c-861d-36ba4b695bca-kube-api-access-xrvd2\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.320729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-serving-cert\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.321023 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj2m5\" (UniqueName: \"kubernetes.io/projected/b5e34714-dec2-46cf-b5b4-514f66525546-kube-api-access-jj2m5\") pod \"multus-admission-controller-857f4d67dd-8gghv\" (UID: \"b5e34714-dec2-46cf-b5b4-514f66525546\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.323283 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba9e438-8285-4efb-9125-db88ba0cc4c7-metrics-tls\") pod \"dns-operator-744455d44c-mkwgx\" (UID: \"6ba9e438-8285-4efb-9125-db88ba0cc4c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.323661 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.323697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d4b33d4-6ef0-465d-99be-20a2816090f9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.321358 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-etcd-ca\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.323927 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b660e001-5d85-4ab4-a617-82082e447e2a-auth-proxy-config\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324126 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f12f6a1-8b0c-4e43-bdce-982701cd9478-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6x98j\" (UID: \"5f12f6a1-8b0c-4e43-bdce-982701cd9478\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9qxd\" (UniqueName: \"kubernetes.io/projected/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-kube-api-access-t9qxd\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324286 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20be07e6-cf06-443d-b49f-f893798034da-serving-cert\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27c3b17b-1acd-412d-90eb-5782d6db606e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324488 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f10316f-35a6-4906-8d11-5bed4a8b9572-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: 
\"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324605 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-webhook-cert\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324663 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f6be6c-9a7f-4aae-b850-565c42dd012d-serving-cert\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324741 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b5e34714-dec2-46cf-b5b4-514f66525546-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8gghv\" (UID: \"b5e34714-dec2-46cf-b5b4-514f66525546\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324765 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-policies\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324866 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.324909 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f10316f-35a6-4906-8d11-5bed4a8b9572-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.325068 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.325163 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4842de2-18f5-4f78-813f-6cbcb7b1b740-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.325345 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b660e001-5d85-4ab4-a617-82082e447e2a-config\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.326398 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-tls\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.327324 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b660e001-5d85-4ab4-a617-82082e447e2a-machine-approver-tls\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.331365 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-webhook-cert\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.331875 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-serving-cert\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.332129 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-apiservice-cert\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.333730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvt2s\" (UniqueName: \"kubernetes.io/projected/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-kube-api-access-tvt2s\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.333925 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/058aae17-e28d-48f3-83c1-9190c9f45a89-cert\") pod \"ingress-canary-rrn9l\" (UID: \"058aae17-e28d-48f3-83c1-9190c9f45a89\") " pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.334461 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b660e001-5d85-4ab4-a617-82082e447e2a-config\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:36:58 crc 
kubenswrapper[4746]: I0129 16:36:58.334718 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm95v\" (UniqueName: \"kubernetes.io/projected/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-kube-api-access-hm95v\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.334801 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngnt\" (UniqueName: \"kubernetes.io/projected/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-kube-api-access-dngnt\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.334892 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99wf\" (UniqueName: \"kubernetes.io/projected/939b72c6-643d-4e50-8223-7596ca0c5a6a-kube-api-access-s99wf\") pod \"downloads-7954f5f757-9v9dn\" (UID: \"939b72c6-643d-4e50-8223-7596ca0c5a6a\") " pod="openshift-console/downloads-7954f5f757-9v9dn" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335041 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8rm\" (UniqueName: \"kubernetes.io/projected/4b1e0c98-de4a-4744-b713-4985cfe776b4-kube-api-access-9b8rm\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335176 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-oauth-config\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335267 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335303 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f1442f3f-48b3-4356-bcb0-773b64ccab8f-node-pullsecrets\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335326 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-config\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335352 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e37f58d7-0e3a-4873-b381-e81be85e8f3f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335381 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1d40aef-51e1-48d1-ac44-5ca93dd7b612-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gkxmb\" (UID: \"b1d40aef-51e1-48d1-ac44-5ca93dd7b612\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335405 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335429 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-config\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335457 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-key\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335670 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-cabundle\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335704 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-oauth-serving-cert\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335775 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-certs\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335859 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335935 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckbs\" (UniqueName: \"kubernetes.io/projected/7becf4a7-7ad1-4d20-9707-a28330253dfd-kube-api-access-cckbs\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.335981 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f1a95c-a6e2-49f5-9adc-6202cb477155-serving-cert\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.336140 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwdc7\" (UniqueName: \"kubernetes.io/projected/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-kube-api-access-dwdc7\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.336210 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4b33d4-6ef0-465d-99be-20a2816090f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.336472 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-tmpfs\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.336743 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.336795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f10316f-35a6-4906-8d11-5bed4a8b9572-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.336849 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.336904 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-config\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.337075 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f6be6c-9a7f-4aae-b850-565c42dd012d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.338785 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-tmpfs\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.338849 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-config\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.338901 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20be07e6-cf06-443d-b49f-f893798034da-config\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.339766 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.340031 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f10316f-35a6-4906-8d11-5bed4a8b9572-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.342475 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4b33d4-6ef0-465d-99be-20a2816090f9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.347358 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.365512 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.384397 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.403288 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.433572 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.437598 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.437839 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:58.937807128 +0000 UTC m=+141.338391792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.438163 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4842de2-18f5-4f78-813f-6cbcb7b1b740-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.438357 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvt2s\" (UniqueName: \"kubernetes.io/projected/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-kube-api-access-tvt2s\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.438511 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm95v\" (UniqueName: \"kubernetes.io/projected/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-kube-api-access-hm95v\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.438678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngnt\" (UniqueName: \"kubernetes.io/projected/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-kube-api-access-dngnt\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.438842 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/058aae17-e28d-48f3-83c1-9190c9f45a89-cert\") pod \"ingress-canary-rrn9l\" (UID: \"058aae17-e28d-48f3-83c1-9190c9f45a89\") " pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.438997 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b8rm\" (UniqueName: \"kubernetes.io/projected/4b1e0c98-de4a-4744-b713-4985cfe776b4-kube-api-access-9b8rm\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439147 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-oauth-config\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439291 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4842de2-18f5-4f78-813f-6cbcb7b1b740-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439411 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f1442f3f-48b3-4356-bcb0-773b64ccab8f-node-pullsecrets\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-config\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439646 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439688 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e37f58d7-0e3a-4873-b381-e81be85e8f3f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439725 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1d40aef-51e1-48d1-ac44-5ca93dd7b612-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gkxmb\" (UID: \"b1d40aef-51e1-48d1-ac44-5ca93dd7b612\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439765 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-key\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439798 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-cabundle\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439828 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-oauth-serving-cert\") pod \"console-f9d7485db-np5s4\" (UID: 
\"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-certs\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439906 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439944 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.439981 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckbs\" (UniqueName: \"kubernetes.io/projected/7becf4a7-7ad1-4d20-9707-a28330253dfd-kube-api-access-cckbs\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440013 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f1a95c-a6e2-49f5-9adc-6202cb477155-serving-cert\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440066 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440100 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-config\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440131 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7becf4a7-7ad1-4d20-9707-a28330253dfd-secret-volume\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440162 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t9s2b\" (UniqueName: \"kubernetes.io/projected/5d373715-9357-4191-bd8d-b87840962375-kube-api-access-t9s2b\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440238 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-etcd-client\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440282 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-service-ca-bundle\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440355 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440488 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlpzw\" (UniqueName: \"kubernetes.io/projected/e37f58d7-0e3a-4873-b381-e81be85e8f3f-kube-api-access-jlpzw\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440544 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwkvv\" (UniqueName: \"kubernetes.io/projected/8f3741d9-db6d-4387-874f-2cf7b81fb737-kube-api-access-lwkvv\") pod \"migrator-59844c95c7-wts2v\" (UID: \"8f3741d9-db6d-4387-874f-2cf7b81fb737\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440591 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-default-certificate\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440643 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440704 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440753 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6hzc\" (UniqueName: \"kubernetes.io/projected/b4842de2-18f5-4f78-813f-6cbcb7b1b740-kube-api-access-v6hzc\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440805 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440855 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xjx\" (UniqueName: \"kubernetes.io/projected/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-kube-api-access-44xjx\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440903 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cad57b29-4969-46ed-a38d-479fa8848fa9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.440952 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441018 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441142 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/de2aede4-40e2-47b3-8f78-c28505411b6b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441228 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5a47724-9572-4886-a2d9-36a0a56b4b20-config-volume\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441266 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441298 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42v8p\" (UniqueName: \"kubernetes.io/projected/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-kube-api-access-42v8p\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e37f58d7-0e3a-4873-b381-e81be85e8f3f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441370 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-service-ca\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441409 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441470 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad57b29-4969-46ed-a38d-479fa8848fa9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441504 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-node-bootstrap-token\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " 
pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441536 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-srv-cert\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441567 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441604 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjknz\" (UniqueName: \"kubernetes.io/projected/608c383e-45e1-43dd-b8ad-9a7499953754-kube-api-access-kjknz\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441626 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-config\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.442126 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f1442f3f-48b3-4356-bcb0-773b64ccab8f-node-pullsecrets\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.442311 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.441637 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.442389 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:58.942369436 +0000 UTC m=+141.342954320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.442682 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.442702 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-service-ca-bundle\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.442903 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml7tr\" (UniqueName: \"kubernetes.io/projected/33f6daff-2886-42dd-95ed-9aeb6aad3ec0-kube-api-access-ml7tr\") pod \"package-server-manager-789f6589d5-rmcph\" (UID: \"33f6daff-2886-42dd-95ed-9aeb6aad3ec0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.443081 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-mountpoint-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.443354 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33f6daff-2886-42dd-95ed-9aeb6aad3ec0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rmcph\" (UID: \"33f6daff-2886-42dd-95ed-9aeb6aad3ec0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.443649 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.443885 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de2aede4-40e2-47b3-8f78-c28505411b6b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc 
kubenswrapper[4746]: I0129 16:36:58.444061 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-image-import-ca\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.444338 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-serving-cert\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.443272 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-mountpoint-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.444446 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.444531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.444873 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxnvj\" (UniqueName: \"kubernetes.io/projected/b1d40aef-51e1-48d1-ac44-5ca93dd7b612-kube-api-access-lxnvj\") pod \"control-plane-machine-set-operator-78cbb6b69f-gkxmb\" (UID: \"b1d40aef-51e1-48d1-ac44-5ca93dd7b612\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.445047 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnx8w\" (UniqueName: \"kubernetes.io/projected/81f1a95c-a6e2-49f5-9adc-6202cb477155-kube-api-access-mnx8w\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.445288 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8jt8\" (UniqueName: \"kubernetes.io/projected/9e56505d-05bd-4223-a84d-4622ce4267ee-kube-api-access-n8jt8\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.445592 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-encryption-config\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.445891 
4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-csi-data-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446046 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4842de2-18f5-4f78-813f-6cbcb7b1b740-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446368 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446284 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446393 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e37f58d7-0e3a-4873-b381-e81be85e8f3f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446642 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5a47724-9572-4886-a2d9-36a0a56b4b20-metrics-tls\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446761 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1442f3f-48b3-4356-bcb0-773b64ccab8f-audit-dir\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446867 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-images\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: 
\"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446931 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr96j\" (UniqueName: \"kubernetes.io/projected/7ec85074-8a89-495b-a55e-9a05cbaae62f-kube-api-access-zr96j\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447004 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f1a95c-a6e2-49f5-9adc-6202cb477155-config\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447054 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-metrics-certs\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447113 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447165 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b1e0c98-de4a-4744-b713-4985cfe776b4-proxy-tls\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pvh5\" (UniqueName: \"kubernetes.io/projected/f1442f3f-48b3-4356-bcb0-773b64ccab8f-kube-api-access-7pvh5\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.446110 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1d40aef-51e1-48d1-ac44-5ca93dd7b612-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gkxmb\" (UID: \"b1d40aef-51e1-48d1-ac44-5ca93dd7b612\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447350 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 
16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447418 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-csi-data-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447439 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-audit\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447550 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447784 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-stats-auth\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447877 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-trusted-ca-bundle\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447967 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b1e0c98-de4a-4744-b713-4985cfe776b4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448021 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s4hp\" (UniqueName: \"kubernetes.io/projected/058aae17-e28d-48f3-83c1-9190c9f45a89-kube-api-access-8s4hp\") pod \"ingress-canary-rrn9l\" (UID: \"058aae17-e28d-48f3-83c1-9190c9f45a89\") " pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448110 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-dir\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448233 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-plugins-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " 
pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448314 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448367 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-srv-cert\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448456 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-registration-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448627 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-socket-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448737 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-config\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448814 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de2aede4-40e2-47b3-8f78-c28505411b6b-config\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448892 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-proxy-tls\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448965 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk7kz\" (UniqueName: \"kubernetes.io/projected/a5a47724-9572-4886-a2d9-36a0a56b4b20-kube-api-access-sk7kz\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.448977 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449007 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c86p8\" (UniqueName: \"kubernetes.io/projected/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-kube-api-access-c86p8\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449094 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad57b29-4969-46ed-a38d-479fa8848fa9-config\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449177 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.447152 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449293 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-images\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449374 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-serving-cert\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrvd2\" (UniqueName: \"kubernetes.io/projected/bac6ed8f-e181-484c-861d-36ba4b695bca-kube-api-access-xrvd2\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449462 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449534 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-policies\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449577 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449690 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449860 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-registration-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449869 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-socket-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.449960 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.450358 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.450459 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f1442f3f-48b3-4356-bcb0-773b64ccab8f-audit-dir\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.450510 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/de2aede4-40e2-47b3-8f78-c28505411b6b-config\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.450654 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-dir\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.451117 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-default-certificate\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.451366 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4842de2-18f5-4f78-813f-6cbcb7b1b740-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.451382 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/bac6ed8f-e181-484c-861d-36ba4b695bca-plugins-dir\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.451432 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b1e0c98-de4a-4744-b713-4985cfe776b4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.451822 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b1e0c98-de4a-4744-b713-4985cfe776b4-proxy-tls\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.452134 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.452394 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-policies\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.452725 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de2aede4-40e2-47b3-8f78-c28505411b6b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.452951 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-images\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.453708 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-metrics-certs\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.453760 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-stats-auth\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.454015 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.455791 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.464671 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.484247 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.504564 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.513959 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e37f58d7-0e3a-4873-b381-e81be85e8f3f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc 
kubenswrapper[4746]: I0129 16:36:58.532058 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.544278 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.544277 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e37f58d7-0e3a-4873-b381-e81be85e8f3f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.550058 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.550247 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.050174343 +0000 UTC m=+141.450759037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.550501 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.551070 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.051016616 +0000 UTC m=+141.451601300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.564735 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.576224 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-srv-cert\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.584502 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.594108 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7becf4a7-7ad1-4d20-9707-a28330253dfd-secret-volume\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.595660 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.596646 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.604864 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.623855 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.638487 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad57b29-4969-46ed-a38d-479fa8848fa9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.644534 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:36:58 crc 
kubenswrapper[4746]: I0129 16:36:58.653212 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.653475 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.153448324 +0000 UTC m=+141.554032988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.653553 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.654019 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.154006089 +0000 UTC m=+141.554590743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.664214 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.671562 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad57b29-4969-46ed-a38d-479fa8848fa9-config\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.685115 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.696515 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-etcd-client\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.704339 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.724485 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.728676 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-audit\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.744711 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.755786 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-image-import-ca\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.756141 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.756336 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.256303413 +0000 UTC m=+141.656888067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.756560 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.756960 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.256949691 +0000 UTC m=+141.657534355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.778012 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.782054 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.784954 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.797908 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-serving-cert\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.805986 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.825014 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.844965 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 
16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.852856 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f1442f3f-48b3-4356-bcb0-773b64ccab8f-encryption-config\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.858504 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.858746 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.358720449 +0000 UTC m=+141.759305113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.859607 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.860342 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.360329054 +0000 UTC m=+141.760913708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.865053 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.871655 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-config\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.885844 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.892696 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f1442f3f-48b3-4356-bcb0-773b64ccab8f-etcd-serving-ca\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.905651 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.926431 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.945130 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.963742 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.964589 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.464549371 +0000 UTC m=+141.865134025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.966683 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.975328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:58 crc kubenswrapper[4746]: E0129 16:36:58.976126 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.476091864 +0000 UTC m=+141.876676718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.984570 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 16:36:58 crc kubenswrapper[4746]: I0129 16:36:58.996468 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.003311 4746 request.go:700] Waited for 1.016716279s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0 Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.006112 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.011969 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.025081 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.031874 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-images\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.045066 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.065029 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.075794 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-proxy-tls\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.076505 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.076719 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.57668381 +0000 UTC m=+141.977268454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.076910 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.077327 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:36:59.577310917 +0000 UTC m=+141.977895561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.084511 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.105435 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.125143 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.137252 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.162639 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.164776 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.173227 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.179249 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.179358 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.679332224 +0000 UTC m=+142.079916868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.180301 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.180760 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.680750603 +0000 UTC m=+142.081335247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.184454 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.195290 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-srv-cert\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.204152 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.224081 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.237293 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/33f6daff-2886-42dd-95ed-9aeb6aad3ec0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rmcph\" (UID: \"33f6daff-2886-42dd-95ed-9aeb6aad3ec0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.244944 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.255314 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-oauth-config\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.265293 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.280931 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.281173 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.781125622 +0000 UTC m=+142.181710306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.281901 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.282547 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.782523321 +0000 UTC m=+142.183108085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.284511 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.291236 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-serving-cert\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.304022 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.313896 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-service-ca\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.328817 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.332283 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-trusted-ca-bundle\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.343978 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.351001 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-oauth-serving-cert\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.364937 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.371645 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-config\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.382630 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.382791 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.882768227 +0000 UTC m=+142.283352871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.383300 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.383739 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.883729615 +0000 UTC m=+142.284314259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.384139 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.403355 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.408260 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81f1a95c-a6e2-49f5-9adc-6202cb477155-config\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.423824 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.439346 4746 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.439589 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/058aae17-e28d-48f3-83c1-9190c9f45a89-cert podName:058aae17-e28d-48f3-83c1-9190c9f45a89 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.939560997 +0000 UTC m=+142.340145641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/058aae17-e28d-48f3-83c1-9190c9f45a89-cert") pod "ingress-canary-rrn9l" (UID: "058aae17-e28d-48f3-83c1-9190c9f45a89") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.440530 4746 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.440603 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-key podName:7ec85074-8a89-495b-a55e-9a05cbaae62f nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.940584576 +0000 UTC m=+142.341169220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-key") pod "service-ca-9c57cc56f-bz2bz" (UID: "7ec85074-8a89-495b-a55e-9a05cbaae62f") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.440625 4746 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.440662 4746 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.440712 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-cabundle podName:7ec85074-8a89-495b-a55e-9a05cbaae62f nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.940688859 +0000 UTC m=+142.341273503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-cabundle") pod "service-ca-9c57cc56f-bz2bz" (UID: "7ec85074-8a89-495b-a55e-9a05cbaae62f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.440791 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-certs podName:5d373715-9357-4191-bd8d-b87840962375 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.940759551 +0000 UTC m=+142.341344225 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-certs") pod "machine-config-server-zkrtb" (UID: "5d373715-9357-4191-bd8d-b87840962375") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.441733 4746 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.441797 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81f1a95c-a6e2-49f5-9adc-6202cb477155-serving-cert podName:81f1a95c-a6e2-49f5-9adc-6202cb477155 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.941778299 +0000 UTC m=+142.342362943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/81f1a95c-a6e2-49f5-9adc-6202cb477155-serving-cert") pod "service-ca-operator-777779d784-pfbjh" (UID: "81f1a95c-a6e2-49f5-9adc-6202cb477155") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.442011 4746 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.442063 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume podName:7becf4a7-7ad1-4d20-9707-a28330253dfd nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.942054957 +0000 UTC m=+142.342639601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume") pod "collect-profiles-29495070-4t48n" (UID: "7becf4a7-7ad1-4d20-9707-a28330253dfd") : failed to sync configmap cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.442698 4746 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.442827 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5a47724-9572-4886-a2d9-36a0a56b4b20-config-volume podName:a5a47724-9572-4886-a2d9-36a0a56b4b20 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.942813018 +0000 UTC m=+142.343397842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a5a47724-9572-4886-a2d9-36a0a56b4b20-config-volume") pod "dns-default-2shkn" (UID: "a5a47724-9572-4886-a2d9-36a0a56b4b20") : failed to sync configmap cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.442876 4746 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.443008 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-node-bootstrap-token podName:5d373715-9357-4191-bd8d-b87840962375 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:36:59.942996074 +0000 UTC m=+142.343580918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-node-bootstrap-token") pod "machine-config-server-zkrtb" (UID: "5d373715-9357-4191-bd8d-b87840962375") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.443609 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.447121 4746 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.447255 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5a47724-9572-4886-a2d9-36a0a56b4b20-metrics-tls podName:a5a47724-9572-4886-a2d9-36a0a56b4b20 nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.947241312 +0000 UTC m=+142.347825956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/a5a47724-9572-4886-a2d9-36a0a56b4b20-metrics-tls") pod "dns-default-2shkn" (UID: "a5a47724-9572-4886-a2d9-36a0a56b4b20") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.466032 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.483678 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.484611 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.485090 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.985055411 +0000 UTC m=+142.385640055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.485594 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.486558 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:36:59.986526011 +0000 UTC m=+142.387110695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.504296 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.524277 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.545521 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.565471 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.584881 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.587151 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.587638 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.087607301 +0000 UTC m=+142.488191975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.588389 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.589294 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.089277828 +0000 UTC m=+142.489862512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.604697 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.642654 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5b97\" (UniqueName: \"kubernetes.io/projected/d459a560-d49c-42c7-afe1-22dc6a872265-kube-api-access-s5b97\") pod \"openshift-config-operator-7777fb866f-lwrzg\" (UID: \"d459a560-d49c-42c7-afe1-22dc6a872265\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.659823 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c59l5\" (UniqueName: \"kubernetes.io/projected/bdc1f9ce-e4b4-492f-b909-d84c33c52543-kube-api-access-c59l5\") pod \"console-operator-58897d9998-ljsjj\" (UID: \"bdc1f9ce-e4b4-492f-b909-d84c33c52543\") " pod="openshift-console-operator/console-operator-58897d9998-ljsjj" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.679256 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppncf\" (UniqueName: \"kubernetes.io/projected/a79c9a1a-d4c9-411d-81cb-0a68d4134e53-kube-api-access-ppncf\") pod \"apiserver-7bbb656c7d-pfgqt\" (UID: \"a79c9a1a-d4c9-411d-81cb-0a68d4134e53\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.685155 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.689375 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.689639 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.189615237 +0000 UTC m=+142.590199891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.690378 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.690796 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.190783019 +0000 UTC m=+142.591367663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.705158 4746 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.724309 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.744398 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.765632 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.786281 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.791143 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.791679 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.291620201 +0000 UTC m=+142.692205025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.792520 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.793095 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.293068092 +0000 UTC m=+142.693652746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.804777 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.826142 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.845044 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.860263 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.867035 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.885815 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.894808 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:36:59 crc kubenswrapper[4746]: E0129 16:36:59.897635 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.397582177 +0000 UTC m=+142.798166851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.905246 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ljsjj" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.908020 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.924650 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.959803 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" Jan 29 16:36:59 crc kubenswrapper[4746]: I0129 16:36:59.987952 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2fht\" (UniqueName: \"kubernetes.io/projected/4d4b33d4-6ef0-465d-99be-20a2816090f9-kube-api-access-k2fht\") pod \"kube-storage-version-migrator-operator-b67b599dd-8vt65\" (UID: \"4d4b33d4-6ef0-465d-99be-20a2816090f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998403 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5a47724-9572-4886-a2d9-36a0a56b4b20-metrics-tls\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998708 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/058aae17-e28d-48f3-83c1-9190c9f45a89-cert\") pod \"ingress-canary-rrn9l\" (UID: \"058aae17-e28d-48f3-83c1-9190c9f45a89\") " pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998779 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-key\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998807 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-cabundle\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998833 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-certs\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998864 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f1a95c-a6e2-49f5-9adc-6202cb477155-serving-cert\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998968 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.998998 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.999043 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5a47724-9572-4886-a2d9-36a0a56b4b20-config-volume\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:36:59.999112 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-node-bootstrap-token\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:36:59.999754 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.499720706 +0000 UTC m=+142.900305390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.000172 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.000790 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5a47724-9572-4886-a2d9-36a0a56b4b20-config-volume\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.003010 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a5a47724-9572-4886-a2d9-36a0a56b4b20-metrics-tls\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.003773 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-key\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.003891 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/7ec85074-8a89-495b-a55e-9a05cbaae62f-signing-cabundle\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.003957 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/058aae17-e28d-48f3-83c1-9190c9f45a89-cert\") pod \"ingress-canary-rrn9l\" (UID: \"058aae17-e28d-48f3-83c1-9190c9f45a89\") " pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.004131 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-node-bootstrap-token\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.013428 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81f1a95c-a6e2-49f5-9adc-6202cb477155-serving-cert\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.013830 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5d373715-9357-4191-bd8d-b87840962375-certs\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.014275 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmhmq\" (UniqueName: \"kubernetes.io/projected/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-kube-api-access-bmhmq\") pod \"controller-manager-879f6c89f-8q4kh\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.022942 4746 request.go:700] Waited for 1.707888195s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.034260 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-bound-sa-token\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.047853 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvgbg\" (UniqueName: \"kubernetes.io/projected/11f6be6c-9a7f-4aae-b850-565c42dd012d-kube-api-access-lvgbg\") pod \"authentication-operator-69f744f599-vcrws\" (UID: \"11f6be6c-9a7f-4aae-b850-565c42dd012d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.076656 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xbcf\" (UniqueName: 
\"kubernetes.io/projected/8f10316f-35a6-4906-8d11-5bed4a8b9572-kube-api-access-7xbcf\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.083604 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69d5w\" (UniqueName: \"kubernetes.io/projected/5f12f6a1-8b0c-4e43-bdce-982701cd9478-kube-api-access-69d5w\") pod \"cluster-samples-operator-665b6dd947-6x98j\" (UID: \"5f12f6a1-8b0c-4e43-bdce-982701cd9478\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.099788 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.100008 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.599978982 +0000 UTC m=+143.000563626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.100215 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwtg8\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-kube-api-access-pwtg8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.100415 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.100840 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.600831247 +0000 UTC m=+143.001415891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.101017 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.114151 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.126485 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb497\" (UniqueName: \"kubernetes.io/projected/b660e001-5d85-4ab4-a617-82082e447e2a-kube-api-access-zb497\") pod \"machine-approver-56656f9798-jhshf\" (UID: \"b660e001-5d85-4ab4-a617-82082e447e2a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.146636 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbrrb\" (UniqueName: \"kubernetes.io/projected/20be07e6-cf06-443d-b49f-f893798034da-kube-api-access-kbrrb\") pod \"etcd-operator-b45778765-ds4g2\" (UID: \"20be07e6-cf06-443d-b49f-f893798034da\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.166241 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvk2k\" (UniqueName: \"kubernetes.io/projected/6ba9e438-8285-4efb-9125-db88ba0cc4c7-kube-api-access-tvk2k\") pod \"dns-operator-744455d44c-mkwgx\" (UID: \"6ba9e438-8285-4efb-9125-db88ba0cc4c7\") " pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.176470 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.182742 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fslng\" (UniqueName: \"kubernetes.io/projected/b08d2e95-cdc5-4934-94ec-2cdb56479e29-kube-api-access-fslng\") pod \"route-controller-manager-6576b87f9c-mwpz6\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.185453 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.197531 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.202305 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.203053 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.703031697 +0000 UTC m=+143.103616341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.204857 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9qxd\" (UniqueName: \"kubernetes.io/projected/17cd8f72-1a5a-4e40-92b5-0bc669d3002f-kube-api-access-t9qxd\") pod \"packageserver-d55dfcdfc-jcn6w\" (UID: \"17cd8f72-1a5a-4e40-92b5-0bc669d3002f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.211211 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ljsjj"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.227759 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj2m5\" (UniqueName: \"kubernetes.io/projected/b5e34714-dec2-46cf-b5b4-514f66525546-kube-api-access-jj2m5\") pod \"multus-admission-controller-857f4d67dd-8gghv\" (UID: \"b5e34714-dec2-46cf-b5b4-514f66525546\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.248210 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.248843 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f10316f-35a6-4906-8d11-5bed4a8b9572-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-stl4w\" (UID: \"8f10316f-35a6-4906-8d11-5bed4a8b9572\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.277695 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99wf\" (UniqueName: \"kubernetes.io/projected/939b72c6-643d-4e50-8223-7596ca0c5a6a-kube-api-access-s99wf\") pod \"downloads-7954f5f757-9v9dn\" (UID: \"939b72c6-643d-4e50-8223-7596ca0c5a6a\") " pod="openshift-console/downloads-7954f5f757-9v9dn" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.282881 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dwdc7\" (UniqueName: \"kubernetes.io/projected/ffc36e25-41cc-4f86-b0d6-afb4a49feec6-kube-api-access-dwdc7\") pod \"openshift-apiserver-operator-796bbdcf4f-7jl46\" (UID: \"ffc36e25-41cc-4f86-b0d6-afb4a49feec6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.284399 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.296663 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.304596 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvt2s\" (UniqueName: \"kubernetes.io/projected/3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3-kube-api-access-tvt2s\") pod \"machine-api-operator-5694c8668f-t4j2d\" (UID: \"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.304705 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.305035 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.805021381 +0000 UTC m=+143.205606025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.306142 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ljsjj" event={"ID":"bdc1f9ce-e4b4-492f-b909-d84c33c52543","Type":"ContainerStarted","Data":"02c0b598e24b369a8090ad60968318562a6605a36c7b1c26f2ff2edb8a5c4c3b"} Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.310653 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" event={"ID":"d459a560-d49c-42c7-afe1-22dc6a872265","Type":"ContainerStarted","Data":"16a5efb68e942de9d292b76e84c30bb486d12a66220ec1191f0ff0375d13e220"} Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.323435 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.324147 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm95v\" (UniqueName: \"kubernetes.io/projected/ed2a4b0e-c66b-45d9-abe6-32cb1481062c-kube-api-access-hm95v\") pod \"olm-operator-6b444d44fb-kwf6d\" (UID: \"ed2a4b0e-c66b-45d9-abe6-32cb1481062c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.341155 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngnt\" (UniqueName: \"kubernetes.io/projected/97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c-kube-api-access-dngnt\") pod \"machine-config-operator-74547568cd-5h72h\" (UID: \"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.355774 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.356599 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vcrws"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.360855 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b8rm\" (UniqueName: \"kubernetes.io/projected/4b1e0c98-de4a-4744-b713-4985cfe776b4-kube-api-access-9b8rm\") pod \"machine-config-controller-84d6567774-4b9fx\" (UID: \"4b1e0c98-de4a-4744-b713-4985cfe776b4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.383450 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckbs\" (UniqueName: \"kubernetes.io/projected/7becf4a7-7ad1-4d20-9707-a28330253dfd-kube-api-access-cckbs\") pod \"collect-profiles-29495070-4t48n\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.384058 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" Jan 29 16:37:00 crc kubenswrapper[4746]: W0129 16:37:00.397416 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11f6be6c_9a7f_4aae_b850_565c42dd012d.slice/crio-81041a01d1e7ef33974935ccfa2c38d3f076c315a149613e5900ee1188a02dd4 WatchSource:0}: Error finding container 81041a01d1e7ef33974935ccfa2c38d3f076c315a149613e5900ee1188a02dd4: Status 404 returned error can't find the container with id 81041a01d1e7ef33974935ccfa2c38d3f076c315a149613e5900ee1188a02dd4 Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.403204 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8q4kh"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.404077 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9s2b\" (UniqueName: \"kubernetes.io/projected/5d373715-9357-4191-bd8d-b87840962375-kube-api-access-t9s2b\") pod \"machine-config-server-zkrtb\" (UID: \"5d373715-9357-4191-bd8d-b87840962375\") " pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.406564 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.407179 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:00.90714888 +0000 UTC m=+143.307733524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.421021 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlpzw\" (UniqueName: \"kubernetes.io/projected/e37f58d7-0e3a-4873-b381-e81be85e8f3f-kube-api-access-jlpzw\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.440637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6hzc\" (UniqueName: \"kubernetes.io/projected/b4842de2-18f5-4f78-813f-6cbcb7b1b740-kube-api-access-v6hzc\") pod \"openshift-controller-manager-operator-756b6f6bc6-lq59b\" (UID: \"b4842de2-18f5-4f78-813f-6cbcb7b1b740\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.466424 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xjx\" (UniqueName: \"kubernetes.io/projected/74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4-kube-api-access-44xjx\") pod \"catalog-operator-68c6474976-bc9l5\" (UID: \"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.483694 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cad57b29-4969-46ed-a38d-479fa8848fa9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8nlbc\" (UID: \"cad57b29-4969-46ed-a38d-479fa8848fa9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.494213 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.504856 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de2aede4-40e2-47b3-8f78-c28505411b6b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-695pm\" (UID: \"de2aede4-40e2-47b3-8f78-c28505411b6b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.509939 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.510409 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.01038312 +0000 UTC m=+143.410967814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.515643 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.521693 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.522100 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42v8p\" (UniqueName: \"kubernetes.io/projected/2c7650ca-1e87-4a25-8a8e-dae70ea5719c-kube-api-access-42v8p\") pod \"router-default-5444994796-ggm4h\" (UID: \"2c7650ca-1e87-4a25-8a8e-dae70ea5719c\") " pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.522158 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.527852 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.533267 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9v9dn" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.535527 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.538918 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.545259 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.550300 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.552900 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e37f58d7-0e3a-4873-b381-e81be85e8f3f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vp4mg\" (UID: \"e37f58d7-0e3a-4873-b381-e81be85e8f3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:37:00 crc kubenswrapper[4746]: W0129 16:37:00.555374 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb08d2e95_cdc5_4934_94ec_2cdb56479e29.slice/crio-2ca45fd2b09bceda4660c848c7e99b26786a1f76df1f65332c375df175c23a6f WatchSource:0}: Error finding container 2ca45fd2b09bceda4660c848c7e99b26786a1f76df1f65332c375df175c23a6f: Status 404 returned error can't find the container with id 2ca45fd2b09bceda4660c848c7e99b26786a1f76df1f65332c375df175c23a6f Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.566812 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.572083 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.572738 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwkvv\" (UniqueName: \"kubernetes.io/projected/8f3741d9-db6d-4387-874f-2cf7b81fb737-kube-api-access-lwkvv\") pod \"migrator-59844c95c7-wts2v\" (UID: \"8f3741d9-db6d-4387-874f-2cf7b81fb737\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.578907 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.587893 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjknz\" (UniqueName: \"kubernetes.io/projected/608c383e-45e1-43dd-b8ad-9a7499953754-kube-api-access-kjknz\") pod \"marketplace-operator-79b997595-khd9z\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.592310 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.604309 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml7tr\" (UniqueName: \"kubernetes.io/projected/33f6daff-2886-42dd-95ed-9aeb6aad3ec0-kube-api-access-ml7tr\") pod \"package-server-manager-789f6589d5-rmcph\" (UID: \"33f6daff-2886-42dd-95ed-9aeb6aad3ec0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.606790 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.614112 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.620729 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.620845 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.621383 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.121364296 +0000 UTC m=+143.521948940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.629283 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.630443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxnvj\" (UniqueName: \"kubernetes.io/projected/b1d40aef-51e1-48d1-ac44-5ca93dd7b612-kube-api-access-lxnvj\") pod \"control-plane-machine-set-operator-78cbb6b69f-gkxmb\" (UID: \"b1d40aef-51e1-48d1-ac44-5ca93dd7b612\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.644870 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnx8w\" (UniqueName: \"kubernetes.io/projected/81f1a95c-a6e2-49f5-9adc-6202cb477155-kube-api-access-mnx8w\") pod \"service-ca-operator-777779d784-pfbjh\" (UID: \"81f1a95c-a6e2-49f5-9adc-6202cb477155\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.653915 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.666756 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.670065 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8jt8\" (UniqueName: \"kubernetes.io/projected/9e56505d-05bd-4223-a84d-4622ce4267ee-kube-api-access-n8jt8\") pod \"oauth-openshift-558db77b4-9678f\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.693689 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6fjzf\" (UID: \"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.698927 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zkrtb" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.703646 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c86p8\" (UniqueName: \"kubernetes.io/projected/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-kube-api-access-c86p8\") pod \"console-f9d7485db-np5s4\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.724383 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.727792 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pvh5\" (UniqueName: \"kubernetes.io/projected/f1442f3f-48b3-4356-bcb0-773b64ccab8f-kube-api-access-7pvh5\") pod \"apiserver-76f77b778f-wcd6d\" (UID: \"f1442f3f-48b3-4356-bcb0-773b64ccab8f\") " pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.727881 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.227845796 +0000 UTC m=+143.628430440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.731515 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8gghv"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.740307 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.751253 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk7kz\" (UniqueName: \"kubernetes.io/projected/a5a47724-9572-4886-a2d9-36a0a56b4b20-kube-api-access-sk7kz\") pod \"dns-default-2shkn\" (UID: \"a5a47724-9572-4886-a2d9-36a0a56b4b20\") " pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.766531 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s4hp\" (UniqueName: \"kubernetes.io/projected/058aae17-e28d-48f3-83c1-9190c9f45a89-kube-api-access-8s4hp\") pod \"ingress-canary-rrn9l\" (UID: \"058aae17-e28d-48f3-83c1-9190c9f45a89\") " pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.793017 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr96j\" (UniqueName: 
\"kubernetes.io/projected/7ec85074-8a89-495b-a55e-9a05cbaae62f-kube-api-access-zr96j\") pod \"service-ca-9c57cc56f-bz2bz\" (UID: \"7ec85074-8a89-495b-a55e-9a05cbaae62f\") " pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.794241 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.811376 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrvd2\" (UniqueName: \"kubernetes.io/projected/bac6ed8f-e181-484c-861d-36ba4b695bca-kube-api-access-xrvd2\") pod \"csi-hostpathplugin-dd6kd\" (UID: \"bac6ed8f-e181-484c-861d-36ba4b695bca\") " pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.815536 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.818056 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.820398 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ds4g2"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.826775 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.827265 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.327225889 +0000 UTC m=+143.727810633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.836296 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mkwgx"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.858171 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:37:00 crc kubenswrapper[4746]: W0129 16:37:00.879703 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5e34714_dec2_46cf_b5b4_514f66525546.slice/crio-9c4504a46491389ac79b8f6f8eea86617a43c6a8c70573cadaeb97274ba0e687 WatchSource:0}: Error finding container 9c4504a46491389ac79b8f6f8eea86617a43c6a8c70573cadaeb97274ba0e687: Status 404 returned error can't find the container with id 9c4504a46491389ac79b8f6f8eea86617a43c6a8c70573cadaeb97274ba0e687 Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.886955 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.887618 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.899684 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.928933 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:00 crc kubenswrapper[4746]: E0129 16:37:00.929439 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.429415358 +0000 UTC m=+143.830000002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.943181 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.959709 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" Jan 29 16:37:00 crc kubenswrapper[4746]: W0129 16:37:00.960581 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20be07e6_cf06_443d_b49f_f893798034da.slice/crio-414c566ec26d39b294eef4659e4496164725be8c92eed9f5a6b6e4dec5c22686 WatchSource:0}: Error finding container 414c566ec26d39b294eef4659e4496164725be8c92eed9f5a6b6e4dec5c22686: Status 404 returned error can't find the container with id 414c566ec26d39b294eef4659e4496164725be8c92eed9f5a6b6e4dec5c22686 Jan 29 16:37:00 crc kubenswrapper[4746]: W0129 16:37:00.962298 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c7650ca_1e87_4a25_8a8e_dae70ea5719c.slice/crio-a37a84daf565b85c375691e8e3d9d5c50888d42df0e7cd955cfe64cdd194237d WatchSource:0}: Error finding container a37a84daf565b85c375691e8e3d9d5c50888d42df0e7cd955cfe64cdd194237d: Status 404 returned error can't find the container with id a37a84daf565b85c375691e8e3d9d5c50888d42df0e7cd955cfe64cdd194237d Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.976297 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx"] Jan 29 16:37:00 crc kubenswrapper[4746]: I0129 16:37:00.988864 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" Jan 29 16:37:00 crc kubenswrapper[4746]: W0129 16:37:00.989043 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffc36e25_41cc_4f86_b0d6_afb4a49feec6.slice/crio-ede9c4beea9a6f1e2a571737ee9587674743e77029a0b9890c7fd0af78d0a426 WatchSource:0}: Error finding container ede9c4beea9a6f1e2a571737ee9587674743e77029a0b9890c7fd0af78d0a426: Status 404 returned error can't find the container with id ede9c4beea9a6f1e2a571737ee9587674743e77029a0b9890c7fd0af78d0a426 Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.006769 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rrn9l" Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.016132 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.033238 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.033536 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.533493552 +0000 UTC m=+143.934078326 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.033965 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.034547 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.534535771 +0000 UTC m=+143.935120415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.135320 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.136669 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.636297109 +0000 UTC m=+144.036881753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:01 crc kubenswrapper[4746]: W0129 16:37:01.205770 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d373715_9357_4191_bd8d_b87840962375.slice/crio-16dc25b6a67f06332a8103fa6d198be1dfb4452fcd78bf1cf55c380edcde4147 WatchSource:0}: Error finding container 16dc25b6a67f06332a8103fa6d198be1dfb4452fcd78bf1cf55c380edcde4147: Status 404 returned error can't find the container with id 16dc25b6a67f06332a8103fa6d198be1dfb4452fcd78bf1cf55c380edcde4147 Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.218002 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w"] Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.237972 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.238913 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.738897422 +0000 UTC m=+144.139482066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.319059 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" event={"ID":"b08d2e95-cdc5-4934-94ec-2cdb56479e29","Type":"ContainerStarted","Data":"2ca45fd2b09bceda4660c848c7e99b26786a1f76df1f65332c375df175c23a6f"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.342018 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.342101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" event={"ID":"20be07e6-cf06-443d-b49f-f893798034da","Type":"ContainerStarted","Data":"414c566ec26d39b294eef4659e4496164725be8c92eed9f5a6b6e4dec5c22686"} Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.343246 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:01.843219681 +0000 UTC m=+144.243804325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.345989 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" event={"ID":"ffc36e25-41cc-4f86-b0d6-afb4a49feec6","Type":"ContainerStarted","Data":"ede9c4beea9a6f1e2a571737ee9587674743e77029a0b9890c7fd0af78d0a426"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.349299 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ggm4h" event={"ID":"2c7650ca-1e87-4a25-8a8e-dae70ea5719c","Type":"ContainerStarted","Data":"a37a84daf565b85c375691e8e3d9d5c50888d42df0e7cd955cfe64cdd194237d"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.355951 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" event={"ID":"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb","Type":"ContainerStarted","Data":"131ae3ffb2176aa8c2e578ebde65292b45151fbd358beb3156fe943986eae9e2"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.356049 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" event={"ID":"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb","Type":"ContainerStarted","Data":"b34ad19bd8dc265013a739a142f9f69fa312f6ff6ff469db1bacc5b7a1976bcf"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.356710 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.357747 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" event={"ID":"5f12f6a1-8b0c-4e43-bdce-982701cd9478","Type":"ContainerStarted","Data":"5895977936e2868d39843771c7c1619f5825c5fa8afe75295d944bf66e2b52de"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.361740 4746 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8q4kh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.361821 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" podUID="55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.363358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" event={"ID":"17cd8f72-1a5a-4e40-92b5-0bc669d3002f","Type":"ContainerStarted","Data":"112e7f91020ae26d459f6cf67279ac8050bba21b398c8888d56dd1e34a0cdd3f"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.366883 4746 generic.go:334] "Generic 
(PLEG): container finished" podID="d459a560-d49c-42c7-afe1-22dc6a872265" containerID="b1e9e2e0b439da576ae857b3002e5b9c4e8e1c9280c34aae5734e4912b447f2d" exitCode=0 Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.367099 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" event={"ID":"d459a560-d49c-42c7-afe1-22dc6a872265","Type":"ContainerDied","Data":"b1e9e2e0b439da576ae857b3002e5b9c4e8e1c9280c34aae5734e4912b447f2d"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.378743 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" event={"ID":"11f6be6c-9a7f-4aae-b850-565c42dd012d","Type":"ContainerStarted","Data":"c143b02925d5b3039745f6a73dc10b2ba0ec888c82db3e2b7d25ee13622bc5bb"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.378791 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" event={"ID":"11f6be6c-9a7f-4aae-b850-565c42dd012d","Type":"ContainerStarted","Data":"81041a01d1e7ef33974935ccfa2c38d3f076c315a149613e5900ee1188a02dd4"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.380534 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zkrtb" event={"ID":"5d373715-9357-4191-bd8d-b87840962375","Type":"ContainerStarted","Data":"16dc25b6a67f06332a8103fa6d198be1dfb4452fcd78bf1cf55c380edcde4147"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.382163 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" event={"ID":"4b1e0c98-de4a-4744-b713-4985cfe776b4","Type":"ContainerStarted","Data":"570d208416fffeb233c1ecdbf8023c45648382a42697d50f57c1a2b89b395faa"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.384105 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ljsjj" event={"ID":"bdc1f9ce-e4b4-492f-b909-d84c33c52543","Type":"ContainerStarted","Data":"68220000f6fd46aecf6996f8eb10353c20ea956471f8778d0ed8efb0888d79ed"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.385349 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ljsjj" Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.387633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" event={"ID":"6ba9e438-8285-4efb-9125-db88ba0cc4c7","Type":"ContainerStarted","Data":"fc19af7bbe96a862cc1dc1fa84543a370b19fe54e1302ef30d3cee1db9265c35"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.388659 4746 patch_prober.go:28] interesting pod/console-operator-58897d9998-ljsjj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.388714 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ljsjj" podUID="bdc1f9ce-e4b4-492f-b909-d84c33c52543" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 29 16:37:01 crc kubenswrapper[4746]: 
I0129 16:37:01.390708 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" event={"ID":"b660e001-5d85-4ab4-a617-82082e447e2a","Type":"ContainerStarted","Data":"7a34439dc27ba5f5982261165697984b0b2b093bc66a72e54d72a68f69b74e8c"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.390750 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" event={"ID":"b660e001-5d85-4ab4-a617-82082e447e2a","Type":"ContainerStarted","Data":"18ba7f0d2628d3d5c16487a4bfa6f0adc5f025bb30f07eb7622b49d4b6e7dfbb"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.392411 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" event={"ID":"4d4b33d4-6ef0-465d-99be-20a2816090f9","Type":"ContainerStarted","Data":"ad061a8091452b667cdabcb985d630cd4e4719dab9b7ebc53e85aac72f28e8f0"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.397181 4746 generic.go:334] "Generic (PLEG): container finished" podID="a79c9a1a-d4c9-411d-81cb-0a68d4134e53" containerID="7e23b1cd748267b07d4e57b63ccc76fe253623e5f894e99eabfc644ca50c18e9" exitCode=0 Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.397287 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" event={"ID":"a79c9a1a-d4c9-411d-81cb-0a68d4134e53","Type":"ContainerDied","Data":"7e23b1cd748267b07d4e57b63ccc76fe253623e5f894e99eabfc644ca50c18e9"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.397318 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" event={"ID":"a79c9a1a-d4c9-411d-81cb-0a68d4134e53","Type":"ContainerStarted","Data":"637d261b66e3fc10021f1c0e99287e354929b10138502e12a20aeaa606d55d5f"} Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.405906 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" event={"ID":"b5e34714-dec2-46cf-b5b4-514f66525546","Type":"ContainerStarted","Data":"9c4504a46491389ac79b8f6f8eea86617a43c6a8c70573cadaeb97274ba0e687"} Jan 29 16:37:01 crc kubenswrapper[4746]: W0129 16:37:01.422030 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f10316f_35a6_4906_8d11_5bed4a8b9572.slice/crio-c84b8ac3363f0d2cc3fa7b3a4d49384f37b8252df64d20940dadad321d0963e1 WatchSource:0}: Error finding container c84b8ac3363f0d2cc3fa7b3a4d49384f37b8252df64d20940dadad321d0963e1: Status 404 returned error can't find the container with id c84b8ac3363f0d2cc3fa7b3a4d49384f37b8252df64d20940dadad321d0963e1 Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.446177 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.453985 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.549563 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.550924 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.050905584 +0000 UTC m=+144.451490228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.653291 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.653824 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.153798715 +0000 UTC m=+144.554383359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.666569 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm"]
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.755599 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.756458 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.256420777 +0000 UTC m=+144.657005431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.767559 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" podStartSLOduration=123.767535789 podStartE2EDuration="2m3.767535789s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:01.764890165 +0000 UTC m=+144.165474829" watchObservedRunningTime="2026-01-29 16:37:01.767535789 +0000 UTC m=+144.168120433"
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.856993 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.857753 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.357739693 +0000 UTC m=+144.758324337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.917064 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ljsjj" podStartSLOduration=124.916989962 podStartE2EDuration="2m4.916989962s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:01.916148918 +0000 UTC m=+144.316733562" watchObservedRunningTime="2026-01-29 16:37:01.916989962 +0000 UTC m=+144.317574606"
Jan 29 16:37:01 crc kubenswrapper[4746]: I0129 16:37:01.975660 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:01 crc kubenswrapper[4746]: E0129 16:37:01.976586 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.476566659 +0000 UTC m=+144.877151303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.084164 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.084805 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.584785438 +0000 UTC m=+144.985370082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.186846 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.187400 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.687370849 +0000 UTC m=+145.087955493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.187688 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.188092 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.68807758 +0000 UTC m=+145.088662224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.292248 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.292963 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.792930774 +0000 UTC m=+145.193515418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.347891 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vcrws" podStartSLOduration=125.347841962 podStartE2EDuration="2m5.347841962s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:02.324700683 +0000 UTC m=+144.725285327" watchObservedRunningTime="2026-01-29 16:37:02.347841962 +0000 UTC m=+144.748426606"
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.397298 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.398422 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:02.898373396 +0000 UTC m=+145.298958040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.505554 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" event={"ID":"4d4b33d4-6ef0-465d-99be-20a2816090f9","Type":"ContainerStarted","Data":"c9ceda1e69461c343967b7a31a00e5c75b42ce5d03b6520fda30e4c6ce167c64"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.510421 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.514852 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.014819515 +0000 UTC m=+145.415404159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.523442 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" event={"ID":"8f10316f-35a6-4906-8d11-5bed4a8b9572","Type":"ContainerStarted","Data":"2aa02782eaaa60fb94fc85e6cb392f4f435c7108da83de196de37a3284e17656"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.523498 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" event={"ID":"8f10316f-35a6-4906-8d11-5bed4a8b9572","Type":"ContainerStarted","Data":"c84b8ac3363f0d2cc3fa7b3a4d49384f37b8252df64d20940dadad321d0963e1"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.573863 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" event={"ID":"de2aede4-40e2-47b3-8f78-c28505411b6b","Type":"ContainerStarted","Data":"bad2823d31321d38726f714402b8a98c16f3d9c43b2e4a15338fbd8bce3f14ee"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.577879 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" event={"ID":"b08d2e95-cdc5-4934-94ec-2cdb56479e29","Type":"ContainerStarted","Data":"cd0fb777f8c73716e27dd5e588e1043c3da2b06bb507efc76b0ff490689b828b"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.578987 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.591243 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" event={"ID":"4b1e0c98-de4a-4744-b713-4985cfe776b4","Type":"ContainerStarted","Data":"9ad4675150c89562c05e61bf816500c3ebc2c911aba6ed961f60c4fd79527b70"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.601541 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ggm4h" event={"ID":"2c7650ca-1e87-4a25-8a8e-dae70ea5719c","Type":"ContainerStarted","Data":"05fb42377566284e2e8d49ecc7eb176c88bcc572071fcef73258af4513bc8e08"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.614514 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.614908 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.114893326 +0000 UTC m=+145.515477970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.618819 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b"]
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.632628 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" event={"ID":"b660e001-5d85-4ab4-a617-82082e447e2a","Type":"ContainerStarted","Data":"55c3cd7b2644a0d7ee0fc1683350e08fad0f967024cbffd98cf4a354b8eb5661"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.637985 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg"]
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.649145 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d"]
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.683093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" event={"ID":"5f12f6a1-8b0c-4e43-bdce-982701cd9478","Type":"ContainerStarted","Data":"c5dc919b6ba49348655f956e79fba39dd7aa299d86d83dca71233081586d6cbf"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.684332 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc"]
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc"] Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.714446 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" event={"ID":"d459a560-d49c-42c7-afe1-22dc6a872265","Type":"ContainerStarted","Data":"35670f606eda9c77d43a494caed53e62f06aa292568d61ca8466bb1207663d34"} Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.715819 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.720870 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh"] Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.720925 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.724579 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t4j2d"] Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.728106 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.228058033 +0000 UTC m=+145.628642677 (durationBeforeRetry 500ms). 
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.737314 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zkrtb" event={"ID":"5d373715-9357-4191-bd8d-b87840962375","Type":"ContainerStarted","Data":"4d37d20fa9416d2c4ef7e007ca5d9d43c3fe1e87a8ac67142bec67e48309c1c2"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.747665 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jhshf" podStartSLOduration=125.747639951 podStartE2EDuration="2m5.747639951s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:02.744261297 +0000 UTC m=+145.144845941" watchObservedRunningTime="2026-01-29 16:37:02.747639951 +0000 UTC m=+145.148224595"
Jan 29 16:37:02 crc kubenswrapper[4746]: W0129 16:37:02.772414 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81f1a95c_a6e2_49f5_9adc_6202cb477155.slice/crio-83d829cfafbb627a7d7f759ac65ff96cb31fc785b10fa7e9efe0ba22a7eb799b WatchSource:0}: Error finding container 83d829cfafbb627a7d7f759ac65ff96cb31fc785b10fa7e9efe0ba22a7eb799b: Status 404 returned error can't find the container with id 83d829cfafbb627a7d7f759ac65ff96cb31fc785b10fa7e9efe0ba22a7eb799b
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.773480 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khd9z"]
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.785791 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" event={"ID":"ffc36e25-41cc-4f86-b0d6-afb4a49feec6","Type":"ContainerStarted","Data":"16d94e586fc9f58259ef5e0235eb9e484b7e41163869cf01183266522f1fa889"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.788458 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph"]
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.820829 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9v9dn"]
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.833354 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.837678 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.337651791 +0000 UTC m=+145.738236435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.846292 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" event={"ID":"17cd8f72-1a5a-4e40-92b5-0bc669d3002f","Type":"ContainerStarted","Data":"46623b89f2b7e0bdb5d91d95a4fd29474ceba15f5c4871b1ad08e78bcf07f6aa"}
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.846446 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w"
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.885773 4746 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-jcn6w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body=
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.885887 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" podUID="17cd8f72-1a5a-4e40-92b5-0bc669d3002f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused"
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.892243 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v"]
Jan 29 16:37:02 crc kubenswrapper[4746]: W0129 16:37:02.900352 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod939b72c6_643d_4e50_8223_7596ca0c5a6a.slice/crio-75802889318b97b80d48e894634cb5cb3c75a988be262e14abd9083168f59187 WatchSource:0}: Error finding container 75802889318b97b80d48e894634cb5cb3c75a988be262e14abd9083168f59187: Status 404 returned error can't find the container with id 75802889318b97b80d48e894634cb5cb3c75a988be262e14abd9083168f59187
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.934690 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.935089 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.435064388 +0000 UTC m=+145.835649022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.948548 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.961176 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ljsjj"
Jan 29 16:37:02 crc kubenswrapper[4746]: E0129 16:37:02.963164 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.463138553 +0000 UTC m=+145.863723197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.966646 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh"
Jan 29 16:37:02 crc kubenswrapper[4746]: I0129 16:37:02.968140 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:02.996438 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:02.996928 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.000296 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bz2bz"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.001078 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ggm4h" podStartSLOduration=125.001060125 podStartE2EDuration="2m5.001060125s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:02.962044193 +0000 UTC m=+145.362628837" watchObservedRunningTime="2026-01-29 16:37:03.001060125 +0000 UTC m=+145.401644769"
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.012350 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.014761 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2shkn"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.016952 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rrn9l"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.050797 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.054268 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.554235364 +0000 UTC m=+145.954820188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.074540 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dd6kd"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.098832 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" podStartSLOduration=125.098789891 podStartE2EDuration="2m5.098789891s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.039155101 +0000 UTC m=+145.439739755" watchObservedRunningTime="2026-01-29 16:37:03.098789891 +0000 UTC m=+145.499374665"
Jan 29 16:37:03 crc kubenswrapper[4746]: W0129 16:37:03.102769 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9d2d668_5a40_44f7_a8bb_5ae390cd9ff1.slice/crio-dbf33e5329dc7829dbf6d9f58d0fe60440a42d3eff8a0520977856c9322dae2b WatchSource:0}: Error finding container dbf33e5329dc7829dbf6d9f58d0fe60440a42d3eff8a0520977856c9322dae2b: Status 404 returned error can't find the container with id dbf33e5329dc7829dbf6d9f58d0fe60440a42d3eff8a0520977856c9322dae2b
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.115348 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-np5s4"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.117601 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h"]
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.120106 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-wcd6d"]
pods=["openshift-apiserver/apiserver-76f77b778f-wcd6d"] Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.124865 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9678f"] Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.122252 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-stl4w" podStartSLOduration=125.122164885 podStartE2EDuration="2m5.122164885s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.113023249 +0000 UTC m=+145.513607893" watchObservedRunningTime="2026-01-29 16:37:03.122164885 +0000 UTC m=+145.522749529" Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.151556 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8vt65" podStartSLOduration=125.151532497 podStartE2EDuration="2m5.151532497s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.149896631 +0000 UTC m=+145.550481275" watchObservedRunningTime="2026-01-29 16:37:03.151532497 +0000 UTC m=+145.552117151" Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.168079 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.168871 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.668851632 +0000 UTC m=+146.069436276 (durationBeforeRetry 500ms). 
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.193646 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"
Jan 29 16:37:03 crc kubenswrapper[4746]: W0129 16:37:03.267853 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3e1b3f9_082c_452a_b27c_b2eb6ca2b999.slice/crio-9221eface88c49f659ac01e19d6a647b255593621c7c4da2258901e5cbc5dcb8 WatchSource:0}: Error finding container 9221eface88c49f659ac01e19d6a647b255593621c7c4da2258901e5cbc5dcb8: Status 404 returned error can't find the container with id 9221eface88c49f659ac01e19d6a647b255593621c7c4da2258901e5cbc5dcb8
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.275766 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.276333 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.776310669 +0000 UTC m=+146.176895313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.340101 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7jl46" podStartSLOduration=126.340078575 podStartE2EDuration="2m6.340078575s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.318802239 +0000 UTC m=+145.719386903" watchObservedRunningTime="2026-01-29 16:37:03.340078575 +0000 UTC m=+145.740663219"
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.378145 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.379103 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.879087597 +0000 UTC m=+146.279672241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.409323 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" podStartSLOduration=125.409290782 podStartE2EDuration="2m5.409290782s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.398942742 +0000 UTC m=+145.799527386" watchObservedRunningTime="2026-01-29 16:37:03.409290782 +0000 UTC m=+145.809875426"
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.480079 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.483627 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" podStartSLOduration=126.483604862 podStartE2EDuration="2m6.483604862s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.480553306 +0000 UTC m=+145.881137960" watchObservedRunningTime="2026-01-29 16:37:03.483604862 +0000 UTC m=+145.884189496"
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.499984 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:03.999920529 +0000 UTC m=+146.400505193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.520874 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" podStartSLOduration=125.520819723 podStartE2EDuration="2m5.520819723s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.519759084 +0000 UTC m=+145.920343748" watchObservedRunningTime="2026-01-29 16:37:03.520819723 +0000 UTC m=+145.921404367"
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.551717 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ggm4h"
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.565948 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:03 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Jan 29 16:37:03 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:03 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.566005 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.603401 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.603873 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.103857598 +0000 UTC m=+146.504442242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.643364 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zkrtb" podStartSLOduration=6.643339093 podStartE2EDuration="6.643339093s" podCreationTimestamp="2026-01-29 16:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:03.642702015 +0000 UTC m=+146.043286659" watchObservedRunningTime="2026-01-29 16:37:03.643339093 +0000 UTC m=+146.043923737"
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.708107 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.708646 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.20862341 +0000 UTC m=+146.609208054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.817406 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.817960 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.317935109 +0000 UTC m=+146.718519743 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.919867 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:03 crc kubenswrapper[4746]: E0129 16:37:03.920265 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.420245084 +0000 UTC m=+146.820829728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:03 crc kubenswrapper[4746]: I0129 16:37:03.993549 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" event={"ID":"608c383e-45e1-43dd-b8ad-9a7499953754","Type":"ContainerStarted","Data":"f19df6ebdb087666650de4154adaa72cee0da99a2ec72c8d83851a0dc7ec301c"}
Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.010869 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" event={"ID":"5f12f6a1-8b0c-4e43-bdce-982701cd9478","Type":"ContainerStarted","Data":"4b8b179ec80b30031f679930ee096580b6078d5353fb01baf1d03b3ef1d7fee6"}
Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.030252 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.030704 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.530686864 +0000 UTC m=+146.931271518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.052353 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-np5s4" event={"ID":"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999","Type":"ContainerStarted","Data":"9221eface88c49f659ac01e19d6a647b255593621c7c4da2258901e5cbc5dcb8"}
Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.080443 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6x98j" podStartSLOduration=127.080418467 podStartE2EDuration="2m7.080418467s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.079967954 +0000 UTC m=+146.480552598" watchObservedRunningTime="2026-01-29 16:37:04.080418467 +0000 UTC m=+146.481003111"
Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.093578 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" event={"ID":"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3","Type":"ContainerStarted","Data":"9887e9ab4aaf433d3865745a5eac582f67fd3d3e2e95ac435831f062581f06ca"}
Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.093638 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" event={"ID":"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3","Type":"ContainerStarted","Data":"cbc8f9fbef3e68742da2dc4684154ab4a3eb55c8c11026a64744e69ef82c48d5"}
Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.131996 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.132597 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.632562897 +0000 UTC m=+147.033147541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.145022 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" event={"ID":"4b1e0c98-de4a-4744-b713-4985cfe776b4","Type":"ContainerStarted","Data":"03f7f8bf76b717bb9064c902e5b455a30f99b660b6276e54ee94fdffa4291fc9"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.158996 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" event={"ID":"e37f58d7-0e3a-4873-b381-e81be85e8f3f","Type":"ContainerStarted","Data":"be898a7c546b50de578022633df5050880b07ba963511e509ed4637c560b143a"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.159071 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" event={"ID":"e37f58d7-0e3a-4873-b381-e81be85e8f3f","Type":"ContainerStarted","Data":"306209e97f40b502ff557c020cfe6d5fbdc371a7af3b21b42ec10d6aeda95329"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.176671 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" event={"ID":"81f1a95c-a6e2-49f5-9adc-6202cb477155","Type":"ContainerStarted","Data":"ca828ff7b0fe69701c070ab6d3d793465e3c92e12b667e058188a7e8d3a06497"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.176744 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" event={"ID":"81f1a95c-a6e2-49f5-9adc-6202cb477155","Type":"ContainerStarted","Data":"83d829cfafbb627a7d7f759ac65ff96cb31fc785b10fa7e9efe0ba22a7eb799b"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.178653 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4b9fx" podStartSLOduration=126.178629956 podStartE2EDuration="2m6.178629956s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.176737373 +0000 UTC m=+146.577322037" watchObservedRunningTime="2026-01-29 16:37:04.178629956 +0000 UTC m=+146.579214600" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.179937 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" event={"ID":"f1442f3f-48b3-4356-bcb0-773b64ccab8f","Type":"ContainerStarted","Data":"4053b55aecd1364e9293f99f194a8b359e6e5582bb500cd3488ac4ccae599aa9"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.225789 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" event={"ID":"ed2a4b0e-c66b-45d9-abe6-32cb1481062c","Type":"ContainerStarted","Data":"6767fd5f9d9f00427c83d34adf7d3305b1c73cca4dc652bf89ac9a87179e8989"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.225892 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" event={"ID":"ed2a4b0e-c66b-45d9-abe6-32cb1481062c","Type":"ContainerStarted","Data":"bbba0cd54f2c11718068aa041bcbd56c511e7e1f995560f1826bd56e7aed20d9"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.227566 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.261291 4746 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-kwf6d container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.261812 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" podUID="ed2a4b0e-c66b-45d9-abe6-32cb1481062c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.262607 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.265546 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.765525768 +0000 UTC m=+147.166110412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.283123 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-pfbjh" podStartSLOduration=126.283092869 podStartE2EDuration="2m6.283092869s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.213520683 +0000 UTC m=+146.614105327" watchObservedRunningTime="2026-01-29 16:37:04.283092869 +0000 UTC m=+146.683677513" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.284835 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" event={"ID":"de2aede4-40e2-47b3-8f78-c28505411b6b","Type":"ContainerStarted","Data":"8eb809c14060652c4108f9881817825513176e1d9d51cb96aa4872282f97876c"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.316152 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" podStartSLOduration=126.316130434 podStartE2EDuration="2m6.316130434s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.282086521 +0000 UTC m=+146.682671165" watchObservedRunningTime="2026-01-29 16:37:04.316130434 +0000 UTC m=+146.716715078" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.337745 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" event={"ID":"b5e34714-dec2-46cf-b5b4-514f66525546","Type":"ContainerStarted","Data":"26a7cf22ce4df9c1b11b9c61ac4ca852e3292e2c9e29b1adc0d19773511694e2"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.337791 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" event={"ID":"b5e34714-dec2-46cf-b5b4-514f66525546","Type":"ContainerStarted","Data":"316c740c5fd05902dd206473e194c52f137f3c4372747f99437c0ef16905169b"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.368813 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.369502 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.869441296 +0000 UTC m=+147.270025940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.369579 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8gghv" podStartSLOduration=126.36955828 podStartE2EDuration="2m6.36955828s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.368962643 +0000 UTC m=+146.769547287" watchObservedRunningTime="2026-01-29 16:37:04.36955828 +0000 UTC m=+146.770142924" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.370394 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-695pm" podStartSLOduration=126.370386964 podStartE2EDuration="2m6.370386964s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.318441469 +0000 UTC m=+146.719026113" watchObservedRunningTime="2026-01-29 16:37:04.370386964 +0000 UTC m=+146.770971618" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.419248 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rrn9l" event={"ID":"058aae17-e28d-48f3-83c1-9190c9f45a89","Type":"ContainerStarted","Data":"b0c2f5a7b1779060c8498f3faf784968e93382afba8999883a27fe7a11d737aa"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.471287 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.472118 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:04.97209812 +0000 UTC m=+147.372682824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.517822 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" event={"ID":"b4842de2-18f5-4f78-813f-6cbcb7b1b740","Type":"ContainerStarted","Data":"dfc626df731a3c0bb5d2e10d9ea9f25d1f6311c3bda2659a44234814d0ef0f47"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.518299 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" event={"ID":"b4842de2-18f5-4f78-813f-6cbcb7b1b740","Type":"ContainerStarted","Data":"e82cb6c1eba638b5cea2d72ba0bf176f52f4f16d5d6a70b342abe0f53b754ede"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.519570 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" event={"ID":"cad57b29-4969-46ed-a38d-479fa8848fa9","Type":"ContainerStarted","Data":"f96c4e530f422a0902dd67b85531dca7340e18306adf5aef8259c8b38ab82277"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.549974 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lq59b" podStartSLOduration=126.549947089 podStartE2EDuration="2m6.549947089s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.546878954 +0000 UTC m=+146.947463618" watchObservedRunningTime="2026-01-29 16:37:04.549947089 +0000 UTC m=+146.950531733" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.560975 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:04 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:04 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:04 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.561028 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.575929 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.579015 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.078987922 +0000 UTC m=+147.479572616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.587433 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" event={"ID":"7becf4a7-7ad1-4d20-9707-a28330253dfd","Type":"ContainerStarted","Data":"9626dfc2ef7c51c0cd106bec9ed1ac46b8671a799fc62b018dafa5655841f22c"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.619691 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" podStartSLOduration=126.619661771 podStartE2EDuration="2m6.619661771s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.608430687 +0000 UTC m=+147.009015341" watchObservedRunningTime="2026-01-29 16:37:04.619661771 +0000 UTC m=+147.020246415" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.699510 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" event={"ID":"8f3741d9-db6d-4387-874f-2cf7b81fb737","Type":"ContainerStarted","Data":"9c8883792c6bc28afd72f3adff830e7eeb4a768fb1f0c6678042520b652411db"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.706589 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.707389 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.207366736 +0000 UTC m=+147.607951380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.733542 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" event={"ID":"a79c9a1a-d4c9-411d-81cb-0a68d4134e53","Type":"ContainerStarted","Data":"3e6be6f5ae055efcd26ab08784c9ae8cc305d58af7ecbea12779df8a8a333cfc"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.768176 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2shkn" event={"ID":"a5a47724-9572-4886-a2d9-36a0a56b4b20","Type":"ContainerStarted","Data":"2d9a462af363149d925653badf45e2be9dc19986c82871b9b5737a4ce069081e"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.811145 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.812821 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.312801647 +0000 UTC m=+147.713386291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.840243 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" event={"ID":"7ec85074-8a89-495b-a55e-9a05cbaae62f","Type":"ContainerStarted","Data":"31f3021d0093f3e788f37318660c434bf7bf56e9ae4436f1c78e9d05af1480c5"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.862397 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" event={"ID":"9e56505d-05bd-4223-a84d-4622ce4267ee","Type":"ContainerStarted","Data":"e8dd20204ce388abaeee33041e4f2d1a03e7b9b6b100e74aafd9227a253d2dd6"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.907408 4746 csr.go:261] certificate signing request csr-mczzz is approved, waiting to be issued Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.910604 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" podStartSLOduration=126.910531012 podStartE2EDuration="2m6.910531012s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:04.905329077 +0000 UTC m=+147.305913721" watchObservedRunningTime="2026-01-29 16:37:04.910531012 +0000 UTC m=+147.311115656" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.913597 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" event={"ID":"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4","Type":"ContainerStarted","Data":"45159744c96c20e67f181959c0acac4fb7198ffc4f470e60c0a4ca2f16a817e3"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.914829 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:04 crc kubenswrapper[4746]: E0129 16:37:04.915395 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.415375007 +0000 UTC m=+147.815959651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.919120 4746 csr.go:257] certificate signing request csr-mczzz is issued Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.923025 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" event={"ID":"b1d40aef-51e1-48d1-ac44-5ca93dd7b612","Type":"ContainerStarted","Data":"0fba3a2cb1a435391e00347b5e2be510daea691af80b5b0af2e16ad3bb3abe78"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.943207 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" event={"ID":"33f6daff-2886-42dd-95ed-9aeb6aad3ec0","Type":"ContainerStarted","Data":"3af409a87d7cbe1685870fb5cba8d20d6b1d2c86b3b4d1425964734e9f234f7b"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.953639 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" event={"ID":"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1","Type":"ContainerStarted","Data":"dbf33e5329dc7829dbf6d9f58d0fe60440a42d3eff8a0520977856c9322dae2b"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.964748 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.965619 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.966699 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" event={"ID":"20be07e6-cf06-443d-b49f-f893798034da","Type":"ContainerStarted","Data":"79057f90e36c9eee3a50c529d7178506002aa4b00bbd70b653a195a0e1f7bb04"} Jan 29 16:37:04 crc kubenswrapper[4746]: I0129 16:37:04.994462 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.014701 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" event={"ID":"6ba9e438-8285-4efb-9125-db88ba0cc4c7","Type":"ContainerStarted","Data":"2cf0ff3f7096e63989b0bd95e5323cedcccc8b7b8dbaa43280f071dbb8982a23"} Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.014760 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" event={"ID":"6ba9e438-8285-4efb-9125-db88ba0cc4c7","Type":"ContainerStarted","Data":"2472bdf82d58ff611317642af37181ec0bd699e52a1ed2ed82b67e102d49be80"} Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.020798 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.021397 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.521373805 +0000 UTC m=+147.921958449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.041276 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9v9dn" event={"ID":"939b72c6-643d-4e50-8223-7596ca0c5a6a","Type":"ContainerStarted","Data":"75802889318b97b80d48e894634cb5cb3c75a988be262e14abd9083168f59187"} Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.041966 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9v9dn" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.043146 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.048198 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" event={"ID":"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c","Type":"ContainerStarted","Data":"a3261280ceecc4b05777b05ab47f5b29319694510a994c0cdf6df1e94de800d8"} Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.043183 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.090475 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ds4g2" podStartSLOduration=127.090451018 podStartE2EDuration="2m7.090451018s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:05.009149043 +0000 UTC m=+147.409733697" watchObservedRunningTime="2026-01-29 16:37:05.090451018 +0000 UTC m=+147.491035652" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.128113 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:05 crc 
kubenswrapper[4746]: E0129 16:37:05.129965 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.629950634 +0000 UTC m=+148.030535278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.135128 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" event={"ID":"bac6ed8f-e181-484c-861d-36ba4b695bca","Type":"ContainerStarted","Data":"3ee417be08a29e4672026c056f7cbb07d442ab566168945e476ccff13205195b"} Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.155490 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwrzg" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.204765 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9v9dn" podStartSLOduration=128.204744527 podStartE2EDuration="2m8.204744527s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:05.174726217 +0000 UTC m=+147.575310861" watchObservedRunningTime="2026-01-29 16:37:05.204744527 +0000 UTC m=+147.605329171" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.205016 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mkwgx" podStartSLOduration=127.205009815 podStartE2EDuration="2m7.205009815s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:05.202799243 +0000 UTC m=+147.603383907" watchObservedRunningTime="2026-01-29 16:37:05.205009815 +0000 UTC m=+147.605594459" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.231227 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.235467 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.735435756 +0000 UTC m=+148.136020400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.336053 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.336833 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.836816114 +0000 UTC m=+148.237400748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.440599 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.440963 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:05.940945149 +0000 UTC m=+148.341529793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.542617 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:05 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:05 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:05 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.542705 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.542792 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.543329 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.043313333 +0000 UTC m=+148.443897977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.644387 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.644927 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.144887897 +0000 UTC m=+148.545472571 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.710635 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-jcn6w" Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.747518 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.748153 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.248125386 +0000 UTC m=+148.648710030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.849465 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.849625 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.349586837 +0000 UTC m=+148.750171471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.849762 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.850237 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.350218674 +0000 UTC m=+148.750803318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.941787 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 16:32:04 +0000 UTC, rotation deadline is 2026-12-04 22:42:30.507781693 +0000 UTC Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.941841 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7422h5m24.565943944s for next certificate rotation Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.950520 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.950706 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.450671406 +0000 UTC m=+148.851256050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:05 crc kubenswrapper[4746]: I0129 16:37:05.950874 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:05 crc kubenswrapper[4746]: E0129 16:37:05.951273 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.451265593 +0000 UTC m=+148.851850237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.052700 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.052998 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.552963649 +0000 UTC m=+148.953548293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.053450 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.053898 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.553878995 +0000 UTC m=+148.954463639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.141793 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" event={"ID":"33f6daff-2886-42dd-95ed-9aeb6aad3ec0","Type":"ContainerStarted","Data":"88b494cbafc39d1b3de31cb7a353c6cbff147a5418b832ed8a095e6e68b963fb"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.141849 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" event={"ID":"33f6daff-2886-42dd-95ed-9aeb6aad3ec0","Type":"ContainerStarted","Data":"4d87260a97c530b6b84177ef3b8123fc7e41006acfd200b625e0e5f25648e202"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.141993 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.143975 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" event={"ID":"608c383e-45e1-43dd-b8ad-9a7499953754","Type":"ContainerStarted","Data":"a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.144231 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.145728 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" event={"ID":"d9d2d668-5a40-44f7-a8bb-5ae390cd9ff1","Type":"ContainerStarted","Data":"77ac2520bae3e59c80b25b6a1022935ad0b3783c50903fe910a5addfd6e33260"} Jan 29 16:37:06 crc 
kubenswrapper[4746]: I0129 16:37:06.146680 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-khd9z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.146733 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.147587 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bz2bz" event={"ID":"7ec85074-8a89-495b-a55e-9a05cbaae62f","Type":"ContainerStarted","Data":"90e371be4e61625515713b641a6661b804c3254898ab8fe6f35461cfcebd2722"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.149778 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8nlbc" event={"ID":"cad57b29-4969-46ed-a38d-479fa8848fa9","Type":"ContainerStarted","Data":"722f90aaea1a46cb31a7893f4e0ff2e108aa11cf4b588ed121d3790f90f961de"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.151424 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" event={"ID":"7becf4a7-7ad1-4d20-9707-a28330253dfd","Type":"ContainerStarted","Data":"56945b9f9905328c80010a14bdf4394e3457f2b49f68f700d7bdb410ad10b2e1"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.153357 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" event={"ID":"e37f58d7-0e3a-4873-b381-e81be85e8f3f","Type":"ContainerStarted","Data":"8546c97d99093567212661fc774584c404da4bfb7de308ad6351d1a05ac4cd4b"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.154091 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.154223 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.654201102 +0000 UTC m=+149.054785746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.154391 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.154760 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.654752319 +0000 UTC m=+149.055336963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.155239 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rrn9l" event={"ID":"058aae17-e28d-48f3-83c1-9190c9f45a89","Type":"ContainerStarted","Data":"6d300e6ee344894e94363a533fc2c0339a0561f925ad1b02e723d056fe22f228"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.157403 4746 generic.go:334] "Generic (PLEG): container finished" podID="f1442f3f-48b3-4356-bcb0-773b64ccab8f" containerID="12fa7d6109fc997ae0eb4035e71727f61427d9fc42bcacfc6249f4e975264d71" exitCode=0 Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.157474 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" event={"ID":"f1442f3f-48b3-4356-bcb0-773b64ccab8f","Type":"ContainerDied","Data":"12fa7d6109fc997ae0eb4035e71727f61427d9fc42bcacfc6249f4e975264d71"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.160319 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" event={"ID":"74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4","Type":"ContainerStarted","Data":"eef2afe28104b2087bdee567b133bb8c1e45ecd5d9cae1dd158d41f42c906bec"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.160532 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.162323 4746 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bc9l5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 29 16:37:06 
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.162393 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" podUID="74f6d2e0-a6dc-4cf3-b1d9-8a9a8ac0e5c4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.163037 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" event={"ID":"3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3","Type":"ContainerStarted","Data":"e9060e159a69d4e59ef0eb22727d18b3fb89e2ab721b8cae6b8a204c7f6c6a2f"}
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.164769 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" event={"ID":"9e56505d-05bd-4223-a84d-4622ce4267ee","Type":"ContainerStarted","Data":"6ade74beead1304efcba1cc838d93b7545569ffb73d031ce56143ed61c7b7079"}
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.165395 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9678f"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.166466 4746 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9678f container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body=
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.166501 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" podUID="9e56505d-05bd-4223-a84d-4622ce4267ee" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.167168 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" event={"ID":"b1d40aef-51e1-48d1-ac44-5ca93dd7b612","Type":"ContainerStarted","Data":"c12b94a47d2f9fa0932f891efbcca64e7b1ea7b2f57db718138f4d255bd94ef5"}
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.168877 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" event={"ID":"8f3741d9-db6d-4387-874f-2cf7b81fb737","Type":"ContainerStarted","Data":"574a9e1ed00d1890000bc4b2b381852c7885c1f3e690cc2149e093f31336c2ed"}
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.168978 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" event={"ID":"8f3741d9-db6d-4387-874f-2cf7b81fb737","Type":"ContainerStarted","Data":"5a8a873e3e6d73625b660f5d95752d6df628e3949c864a9c9123e0017c6bf7e4"}
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.175618 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" event={"ID":"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c","Type":"ContainerStarted","Data":"d39f8166e6129267fd406177b4d50536d91876be40cb12f13519d712ed9c318f"}
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.175736 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" event={"ID":"97f60b8a-04dd-42eb-8f0f-8b7e001fdf9c","Type":"ContainerStarted","Data":"c8cc30ecb69576c615ab8ee320b38eb8d8256d12d49f461aadba0016e822f0da"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.180024 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph" podStartSLOduration=128.180004696 podStartE2EDuration="2m8.180004696s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.179154211 +0000 UTC m=+148.579738855" watchObservedRunningTime="2026-01-29 16:37:06.180004696 +0000 UTC m=+148.580589340" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.183312 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" event={"ID":"bac6ed8f-e181-484c-861d-36ba4b695bca","Type":"ContainerStarted","Data":"beb67298f919d1f5466206ff9dc7f29947f890d0c951d6bd0bef3f307eb4bd18"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.188244 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2shkn" event={"ID":"a5a47724-9572-4886-a2d9-36a0a56b4b20","Type":"ContainerStarted","Data":"a4fbbe3558245e1ca39eab6c524423769c66d79800a17e14f45a1d546a18299e"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.188296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2shkn" event={"ID":"a5a47724-9572-4886-a2d9-36a0a56b4b20","Type":"ContainerStarted","Data":"f4da039dee76f7ddad82e73bd95f3edbcea9e9d1373206a933f9db23a0b38312"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.188944 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.190214 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9v9dn" event={"ID":"939b72c6-643d-4e50-8223-7596ca0c5a6a","Type":"ContainerStarted","Data":"cdfdb7ef63b4df260697ddbf2748c93c5cfc25fa83ea4dcef0ec7bdc5407eaba"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.190997 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.191040 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.194012 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-np5s4" event={"ID":"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999","Type":"ContainerStarted","Data":"2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16"} Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.229421 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pfgqt" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.249406 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kwf6d" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.255155 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.258325 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.758297387 +0000 UTC m=+149.158882071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.291768 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" podStartSLOduration=128.291741142 podStartE2EDuration="2m8.291741142s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.233618936 +0000 UTC m=+148.634203580" watchObservedRunningTime="2026-01-29 16:37:06.291741142 +0000 UTC m=+148.692325786" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.349037 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" podStartSLOduration=129.349013606 podStartE2EDuration="2m9.349013606s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.348301936 +0000 UTC m=+148.748886590" watchObservedRunningTime="2026-01-29 16:37:06.349013606 +0000 UTC m=+148.749598250" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.349865 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wts2v" podStartSLOduration=128.349858729 podStartE2EDuration="2m8.349858729s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.294311144 +0000 UTC m=+148.694895788" watchObservedRunningTime="2026-01-29 16:37:06.349858729 +0000 UTC m=+148.750443373" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.360645 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.360759 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.361171 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.861156375 +0000 UTC m=+149.261741019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.387289 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.462119 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.462488 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.462532 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.462564 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.465344 4746 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:06.965318831 +0000 UTC m=+149.365903485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.468781 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.470091 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.482835 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5h72h" podStartSLOduration=128.482811471 podStartE2EDuration="2m8.482811471s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.414422736 +0000 UTC m=+148.815007380" watchObservedRunningTime="2026-01-29 16:37:06.482811471 +0000 UTC m=+148.883396115"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.483046 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.490825 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" podStartSLOduration=129.490806805 podStartE2EDuration="2m9.490806805s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.487135922 +0000 UTC m=+148.887720576" watchObservedRunningTime="2026-01-29 16:37:06.490806805 +0000 UTC m=+148.891391449"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.533490 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:06 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Jan 29 16:37:06 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:06 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.533569 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.551699 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rrn9l" podStartSLOduration=9.551672708 podStartE2EDuration="9.551672708s" podCreationTimestamp="2026-01-29 16:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.523220422 +0000 UTC m=+148.923805066" watchObservedRunningTime="2026-01-29 16:37:06.551672708 +0000 UTC m=+148.952257352"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.576787 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.578310 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.078286433 +0000 UTC m=+149.478871077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
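Every MountVolume/UnmountVolume retry in this stretch fails for the same root cause: kubelet's volume manager has no registered CSI driver named kubevirt.io.hostpath-provisioner yet. The driver pod (hostpath-provisioner/csi-hostpathplugin-dd6kd, whose ContainerStarted event appears a few entries earlier) has not finished registering over the kubelet plugin-registration socket, so each attempt is requeued with the 500ms durationBeforeRetry seen in each entry, and the errors clear on their own once registration lands. As a hedged client-go sketch (it assumes a reachable cluster and a kubeconfig path of your own; the node name "crc" is taken from these entries), the node's CSINode object mirrors kubelet's registration list and shows when the driver appears:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point it at the cluster that produced this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The CSINode object for node "crc" lists the CSI drivers kubelet has registered.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		// The retries above stop once kubevirt.io.hostpath-provisioner shows up here.
		fmt.Println("registered CSI driver:", d.Name)
	}
}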
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.578868 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.604068 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vp4mg" podStartSLOduration=128.604043254 podStartE2EDuration="2m8.604043254s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.564911599 +0000 UTC m=+148.965496243" watchObservedRunningTime="2026-01-29 16:37:06.604043254 +0000 UTC m=+149.004627898"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.604518 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6fjzf" podStartSLOduration=128.604512267 podStartE2EDuration="2m8.604512267s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.603489209 +0000 UTC m=+149.004073853" watchObservedRunningTime="2026-01-29 16:37:06.604512267 +0000 UTC m=+149.005096921"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.651591 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gkxmb" podStartSLOduration=128.651556734 podStartE2EDuration="2m8.651556734s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.64464101 +0000 UTC m=+149.045225644" watchObservedRunningTime="2026-01-29 16:37:06.651556734 +0000 UTC m=+149.052141378"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.669620 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-t4j2d" podStartSLOduration=128.669597239 podStartE2EDuration="2m8.669597239s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.667096219 +0000 UTC m=+149.067680863" watchObservedRunningTime="2026-01-29 16:37:06.669597239 +0000 UTC m=+149.070181883"
Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.685385 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.685794 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.185775152 +0000 UTC m=+149.586359796 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.685907 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.686309 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.186302106 +0000 UTC m=+149.586886750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.686514 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.704020 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.784920 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" podStartSLOduration=128.784900957 podStartE2EDuration="2m8.784900957s" podCreationTimestamp="2026-01-29 16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.784036382 +0000 UTC m=+149.184621026" watchObservedRunningTime="2026-01-29 16:37:06.784900957 +0000 UTC m=+149.185485601" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.787837 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.788331 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.288311722 +0000 UTC m=+149.688896366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.860240 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2shkn" podStartSLOduration=9.860214884 podStartE2EDuration="9.860214884s" podCreationTimestamp="2026-01-29 16:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.858668061 +0000 UTC m=+149.259252705" watchObservedRunningTime="2026-01-29 16:37:06.860214884 +0000 UTC m=+149.260799518" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.861896 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-np5s4" podStartSLOduration=129.861890221 podStartE2EDuration="2m9.861890221s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:06.821100469 +0000 UTC m=+149.221685113" watchObservedRunningTime="2026-01-29 16:37:06.861890221 +0000 UTC m=+149.262474865" Jan 29 16:37:06 crc kubenswrapper[4746]: I0129 16:37:06.892201 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:06 crc kubenswrapper[4746]: E0129 16:37:06.892626 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.392613611 +0000 UTC m=+149.793198255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.005001 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.005180 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:37:07.505156241 +0000 UTC m=+149.905740885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.005357 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.005766 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.505757678 +0000 UTC m=+149.906342322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.108819 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.109316 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.609295196 +0000 UTC m=+150.009879840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.224137 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.224970 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.724942184 +0000 UTC m=+150.125526828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.299134 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" event={"ID":"f1442f3f-48b3-4356-bcb0-773b64ccab8f","Type":"ContainerStarted","Data":"46ed9cc7b028fe3644d150f58a751401547275f8e44fc314b0066608d2a654fe"} Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.305321 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-khd9z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.305386 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.305749 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.305830 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.306534 4746 
patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9678f container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body= Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.306572 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" podUID="9e56505d-05bd-4223-a84d-4622ce4267ee" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.323145 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bc9l5" Jan 29 16:37:07 crc kubenswrapper[4746]: W0129 16:37:07.323902 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-86a24b5b4ff80d4c3bf44f5ecb7dc9101d59dfd132a6ae77ae70b1028e1ade48 WatchSource:0}: Error finding container 86a24b5b4ff80d4c3bf44f5ecb7dc9101d59dfd132a6ae77ae70b1028e1ade48: Status 404 returned error can't find the container with id 86a24b5b4ff80d4c3bf44f5ecb7dc9101d59dfd132a6ae77ae70b1028e1ade48 Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.325881 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.326395 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.826377773 +0000 UTC m=+150.226962417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.428848 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.434059 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:07.934038626 +0000 UTC m=+150.334623270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.531115 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.531434 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.031398331 +0000 UTC m=+150.431982975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.532005 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.532537 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.032514222 +0000 UTC m=+150.433099056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.538809 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:07 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:07 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:07 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.538872 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.633298 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.633787 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.133730616 +0000 UTC m=+150.534315360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.633865 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.634374 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.134351873 +0000 UTC m=+150.534936517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.735321 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.735542 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.235483774 +0000 UTC m=+150.636068428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.735664 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.736079 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.23606419 +0000 UTC m=+150.636648834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.837556 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.837818 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.337778597 +0000 UTC m=+150.738363241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.838021 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.838499 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.338487847 +0000 UTC m=+150.739072701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.882037 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.883076 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.885336 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.889108 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.934070 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.939148 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.439117974 +0000 UTC m=+150.839702618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.939021 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:07 crc kubenswrapper[4746]: I0129 16:37:07.939564 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:07 crc kubenswrapper[4746]: E0129 16:37:07.939928 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.439918596 +0000 UTC m=+150.840503240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.041390 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.041667 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.041770 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.041911 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.54189032 +0000 UTC m=+150.942474964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.143042 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.143575 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.143925 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.643909506 +0000 UTC m=+151.044494150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.144144 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.144247 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.175271 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.199060 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.245982 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.246147 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.746119846 +0000 UTC m=+151.146704490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.246455 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.246819 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.746806625 +0000 UTC m=+151.147391269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.311177 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fc232ea3dee7ac7f4ac9d478162c77141da431f6b5fd6cbec4d08c8a0c56f1b1"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.311673 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"86a24b5b4ff80d4c3bf44f5ecb7dc9101d59dfd132a6ae77ae70b1028e1ade48"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.315882 4746 generic.go:334] "Generic (PLEG): container finished" podID="7becf4a7-7ad1-4d20-9707-a28330253dfd" containerID="56945b9f9905328c80010a14bdf4394e3457f2b49f68f700d7bdb410ad10b2e1" exitCode=0
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.316041 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" event={"ID":"7becf4a7-7ad1-4d20-9707-a28330253dfd","Type":"ContainerDied","Data":"56945b9f9905328c80010a14bdf4394e3457f2b49f68f700d7bdb410ad10b2e1"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.319391 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"042323763ececfd2fdb9705e8de2b12d008924d01a91abb8deb900e9ce333175"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.319453 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d335f27ad561ace4459113a5649aec9195875b4bbc5f3253b0b0231a13852cb4"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.326103 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" event={"ID":"f1442f3f-48b3-4356-bcb0-773b64ccab8f","Type":"ContainerStarted","Data":"94816eb72de42616c162e89977cbaa217c26033d38105842de36f407e552ce01"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.327730 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a3456eed3288a34369dec6b853cc8ed23b1c4f5db5fe941cd4ae4cf3630a9bd9"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.327775 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fd30feb30435c2fd2567aff17f2cec0aed5770c76db9da9de9436268744ee240"}
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.347367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.348046 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.848024849 +0000 UTC m=+151.248609493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.391948 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" podStartSLOduration=131.391925368 podStartE2EDuration="2m11.391925368s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:08.391633519 +0000 UTC m=+150.792218173" watchObservedRunningTime="2026-01-29 16:37:08.391925368 +0000 UTC m=+150.792510012"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.448882 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.450873 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:08.950852827 +0000 UTC m=+151.351437471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.463573 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 16:37:08 crc kubenswrapper[4746]: W0129 16:37:08.477221 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5b9e90fc_fdbd_4047_9c64_03e12afb14a4.slice/crio-ebdcb0d61538effdc12c6decdf9755d7dffa059adfd6ccab1dfc581f12a5d031 WatchSource:0}: Error finding container ebdcb0d61538effdc12c6decdf9755d7dffa059adfd6ccab1dfc581f12a5d031: Status 404 returned error can't find the container with id ebdcb0d61538effdc12c6decdf9755d7dffa059adfd6ccab1dfc581f12a5d031
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.498364 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9678f"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.538432 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:08 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Jan 29 16:37:08 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:08 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.538508 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.550771 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.550962 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.050938229 +0000 UTC m=+151.451522873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.551066 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.551531 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.051516365 +0000 UTC m=+151.452101009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.607072 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tqxz6"]
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.609965 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.616589 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.622380 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tqxz6"]
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.651936 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.652445 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.152423649 +0000 UTC m=+151.553008293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.755004 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.755080 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56q9d\" (UniqueName: \"kubernetes.io/projected/d1bf7638-7d83-4b72-addf-51bae49b7390-kube-api-access-56q9d\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.755124 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-utilities\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.755175 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-catalog-content\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.755563 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.255547866 +0000 UTC m=+151.656132510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.778257 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8w7wb"]
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.779538 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.783426 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.793843 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8w7wb"]
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.855913 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.856095 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.356068199 +0000 UTC m=+151.756652843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.856650 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-catalog-content\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.856822 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhhr\" (UniqueName: \"kubernetes.io/projected/8b74b912-b845-497d-8566-6975dc1fdce5-kube-api-access-6vhhr\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.856934 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.857030 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-utilities\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.857240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-catalog-content\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.857326 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56q9d\" (UniqueName: \"kubernetes.io/projected/d1bf7638-7d83-4b72-addf-51bae49b7390-kube-api-access-56q9d\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.857445 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-utilities\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.857566 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.357460808 +0000 UTC m=+151.758045562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.857659 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-catalog-content\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.857817 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-utilities\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.885638 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56q9d\" (UniqueName: \"kubernetes.io/projected/d1bf7638-7d83-4b72-addf-51bae49b7390-kube-api-access-56q9d\") pod \"certified-operators-tqxz6\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.959482 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.959813 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-catalog-content\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.959935 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.459899336 +0000 UTC m=+151.860483970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.960156 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vhhr\" (UniqueName: \"kubernetes.io/projected/8b74b912-b845-497d-8566-6975dc1fdce5-kube-api-access-6vhhr\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.960312 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.960350 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-utilities\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.960372 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-catalog-content\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.960811 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-utilities\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:08 crc kubenswrapper[4746]: E0129 16:37:08.960834 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.460817391 +0000 UTC m=+151.861402035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.982469 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rxvtt"]
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.984038 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.987164 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tqxz6"
Jan 29 16:37:08 crc kubenswrapper[4746]: I0129 16:37:08.994962 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vhhr\" (UniqueName: \"kubernetes.io/projected/8b74b912-b845-497d-8566-6975dc1fdce5-kube-api-access-6vhhr\") pod \"community-operators-8w7wb\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:08.999691 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxvtt"]
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.061608 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.061948 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-catalog-content\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.062023 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-utilities\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.062048 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfh72\" (UniqueName: \"kubernetes.io/projected/9d23c8d2-30ea-475c-b864-7d293459d078-kube-api-access-nfh72\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.062284 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.56226616 +0000 UTC m=+151.962850804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.098464 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8w7wb"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.165666 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.165732 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-catalog-content\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.165794 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-utilities\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.165818 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfh72\" (UniqueName: \"kubernetes.io/projected/9d23c8d2-30ea-475c-b864-7d293459d078-kube-api-access-nfh72\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.166164 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.666137329 +0000 UTC m=+152.066721973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.166338 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-utilities\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.169096 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6mf4w"]
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.169423 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-catalog-content\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.170344 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.188460 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mf4w"]
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.208990 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfh72\" (UniqueName: \"kubernetes.io/projected/9d23c8d2-30ea-475c-b864-7d293459d078-kube-api-access-nfh72\") pod \"certified-operators-rxvtt\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.267949 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.268232 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhz8\" (UniqueName: \"kubernetes.io/projected/465b9d5a-3865-4d08-8528-f26c88c00198-kube-api-access-bnhz8\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.268296 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-utilities\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.268330 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-catalog-content\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.268476 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.768454282 +0000 UTC m=+152.169038926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.318624 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxvtt"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.358273 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tqxz6"]
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.370349 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnhz8\" (UniqueName: \"kubernetes.io/projected/465b9d5a-3865-4d08-8528-f26c88c00198-kube-api-access-bnhz8\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.370447 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-utilities\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.370476 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.370503 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-catalog-content\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.371582 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-catalog-content\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.372083 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-utilities\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.372265 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.872248497 +0000 UTC m=+152.272833201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.401037 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnhz8\" (UniqueName: \"kubernetes.io/projected/465b9d5a-3865-4d08-8528-f26c88c00198-kube-api-access-bnhz8\") pod \"community-operators-6mf4w\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") " pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.420850 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b9e90fc-fdbd-4047-9c64-03e12afb14a4","Type":"ContainerStarted","Data":"8dc755007cb0bf28fe65282dba08618eb0815d404658e06b8dfdb3943a1371a2"}
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.420969 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b9e90fc-fdbd-4047-9c64-03e12afb14a4","Type":"ContainerStarted","Data":"ebdcb0d61538effdc12c6decdf9755d7dffa059adfd6ccab1dfc581f12a5d031"}
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.464635 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" event={"ID":"bac6ed8f-e181-484c-861d-36ba4b695bca","Type":"ContainerStarted","Data":"fb307904f09b94b77bc2140b03d820fd040092cb621ed68942f19dfa77510602"}
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.465058 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.473332 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.473741 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:09.973719957 +0000 UTC m=+152.374304601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.536167 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:09 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Jan 29 16:37:09 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:09 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.536253 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.555582 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.591562 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.592546 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.092526442 +0000 UTC m=+152.493111086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.620356 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.620327911 podStartE2EDuration="2.620327911s" podCreationTimestamp="2026-01-29 16:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:09.45703544 +0000 UTC m=+151.857620104" watchObservedRunningTime="2026-01-29 16:37:09.620327911 +0000 UTC m=+152.020912555"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.621038 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8w7wb"]
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.692850 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.693251 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.193232291 +0000 UTC m=+152.593816935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.795023 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.795406 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.29538611 +0000 UTC m=+152.695970754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.895896 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.896645 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.396627214 +0000 UTC m=+152.797211858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.972955 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n"
Jan 29 16:37:09 crc kubenswrapper[4746]: I0129 16:37:09.998801 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:09 crc kubenswrapper[4746]: E0129 16:37:09.999241 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.499227216 +0000 UTC m=+152.899811860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.021159 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rxvtt"]
Jan 29 16:37:10 crc kubenswrapper[4746]: W0129 16:37:10.064904 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d23c8d2_30ea_475c_b864_7d293459d078.slice/crio-c51f67bc8f5eef3b2b07075e2400f296e9c87d78f35e70357fb026f2cc93b890 WatchSource:0}: Error finding container c51f67bc8f5eef3b2b07075e2400f296e9c87d78f35e70357fb026f2cc93b890: Status 404 returned error can't find the container with id c51f67bc8f5eef3b2b07075e2400f296e9c87d78f35e70357fb026f2cc93b890
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.099923 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.099983 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cckbs\" (UniqueName: \"kubernetes.io/projected/7becf4a7-7ad1-4d20-9707-a28330253dfd-kube-api-access-cckbs\") pod \"7becf4a7-7ad1-4d20-9707-a28330253dfd\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") "
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.100064 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7becf4a7-7ad1-4d20-9707-a28330253dfd-secret-volume\") pod \"7becf4a7-7ad1-4d20-9707-a28330253dfd\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") "
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.100095 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume\") pod \"7becf4a7-7ad1-4d20-9707-a28330253dfd\" (UID: \"7becf4a7-7ad1-4d20-9707-a28330253dfd\") "
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.101409 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume" (OuterVolumeSpecName: "config-volume") pod "7becf4a7-7ad1-4d20-9707-a28330253dfd" (UID: "7becf4a7-7ad1-4d20-9707-a28330253dfd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.105312 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.605292955 +0000 UTC m=+153.005877599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.116286 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7becf4a7-7ad1-4d20-9707-a28330253dfd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7becf4a7-7ad1-4d20-9707-a28330253dfd" (UID: "7becf4a7-7ad1-4d20-9707-a28330253dfd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.117725 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7becf4a7-7ad1-4d20-9707-a28330253dfd-kube-api-access-cckbs" (OuterVolumeSpecName: "kube-api-access-cckbs") pod "7becf4a7-7ad1-4d20-9707-a28330253dfd" (UID: "7becf4a7-7ad1-4d20-9707-a28330253dfd"). InnerVolumeSpecName "kube-api-access-cckbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.186005 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6mf4w"]
Jan 29 16:37:10 crc kubenswrapper[4746]: W0129 16:37:10.201683 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod465b9d5a_3865_4d08_8528_f26c88c00198.slice/crio-30e26e0fec94f988b6adbd5fc82735f74b0985adf5a6eb8cf7f90f910511143d WatchSource:0}: Error finding container 30e26e0fec94f988b6adbd5fc82735f74b0985adf5a6eb8cf7f90f910511143d: Status 404 returned error can't find the container with id 30e26e0fec94f988b6adbd5fc82735f74b0985adf5a6eb8cf7f90f910511143d
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.202014 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.202122 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7becf4a7-7ad1-4d20-9707-a28330253dfd-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.202135 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7becf4a7-7ad1-4d20-9707-a28330253dfd-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.202149 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cckbs\" (UniqueName: \"kubernetes.io/projected/7becf4a7-7ad1-4d20-9707-a28330253dfd-kube-api-access-cckbs\") on node \"crc\" DevicePath \"\""
Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.202469 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.702451464 +0000 UTC m=+153.103036108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.303426 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.303703 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.803651397 +0000 UTC m=+153.204236041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.304081 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.304457 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.804443929 +0000 UTC m=+153.205028563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.316672 4746 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.405552 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.405949 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:10.90592805 +0000 UTC m=+153.306512694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.485995 4746 generic.go:334] "Generic (PLEG): container finished" podID="5b9e90fc-fdbd-4047-9c64-03e12afb14a4" containerID="8dc755007cb0bf28fe65282dba08618eb0815d404658e06b8dfdb3943a1371a2" exitCode=0
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.486107 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b9e90fc-fdbd-4047-9c64-03e12afb14a4","Type":"ContainerDied","Data":"8dc755007cb0bf28fe65282dba08618eb0815d404658e06b8dfdb3943a1371a2"}
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.487894 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" event={"ID":"7becf4a7-7ad1-4d20-9707-a28330253dfd","Type":"ContainerDied","Data":"9626dfc2ef7c51c0cd106bec9ed1ac46b8671a799fc62b018dafa5655841f22c"}
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.487950 4746 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.489219 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9626dfc2ef7c51c0cd106bec9ed1ac46b8671a799fc62b018dafa5655841f22c" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.491001 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerID="61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d" exitCode=0 Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.491278 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxz6" event={"ID":"d1bf7638-7d83-4b72-addf-51bae49b7390","Type":"ContainerDied","Data":"61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.491304 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxz6" event={"ID":"d1bf7638-7d83-4b72-addf-51bae49b7390","Type":"ContainerStarted","Data":"271188565ba6560f1e40b5cdc795fda5228931abe3d9a8af55d7c6872b38d539"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.495155 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mf4w" event={"ID":"465b9d5a-3865-4d08-8528-f26c88c00198","Type":"ContainerStarted","Data":"83f359fceb43399700fea39d9897f4a71e0a7e75d99799f0b16cde0ba5648445"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.496077 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mf4w" event={"ID":"465b9d5a-3865-4d08-8528-f26c88c00198","Type":"ContainerStarted","Data":"30e26e0fec94f988b6adbd5fc82735f74b0985adf5a6eb8cf7f90f910511143d"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.496332 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.506015 4746 generic.go:334] "Generic (PLEG): container finished" podID="8b74b912-b845-497d-8566-6975dc1fdce5" containerID="6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb" exitCode=0 Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.506324 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8w7wb" event={"ID":"8b74b912-b845-497d-8566-6975dc1fdce5","Type":"ContainerDied","Data":"6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.506362 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8w7wb" event={"ID":"8b74b912-b845-497d-8566-6975dc1fdce5","Type":"ContainerStarted","Data":"675719dc6ef108ae8af9842143fcaf233144201bfe64d617bb1d46f36f719be7"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.509418 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.509943 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:11.00992588 +0000 UTC m=+153.410510524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.523565 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" event={"ID":"bac6ed8f-e181-484c-861d-36ba4b695bca","Type":"ContainerStarted","Data":"24df3680030eba74dcbf004c1189c9bad0de43eeb81822b56c8994b1775ff379"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.523643 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" event={"ID":"bac6ed8f-e181-484c-861d-36ba4b695bca","Type":"ContainerStarted","Data":"0043782ecec1d27628cae62ea2694a3d40419d464460111de4df7c790a8ddf66"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.529085 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ggm4h" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.530792 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerStarted","Data":"72de6574723310ee7780bb7cad8f9308f14c5cadede07bfdf5e6a93b76015600"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.530826 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerStarted","Data":"c51f67bc8f5eef3b2b07075e2400f296e9c87d78f35e70357fb026f2cc93b890"} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.532696 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:10 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:10 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:10 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.532746 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.536513 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.536539 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.537141 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.537169 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.610742 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.610968 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:37:11.110930927 +0000 UTC m=+153.511515571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.611732 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.612222 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:37:11.112205183 +0000 UTC m=+153.512789828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-t9srx" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.613148 4746 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T16:37:10.316708382Z","Handler":null,"Name":""} Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.617292 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dd6kd" podStartSLOduration=13.617280516 podStartE2EDuration="13.617280516s" podCreationTimestamp="2026-01-29 16:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:10.61492452 +0000 UTC m=+153.015509184" watchObservedRunningTime="2026-01-29 16:37:10.617280516 +0000 UTC m=+153.017865160" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.620208 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.631486 4746 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.631536 4746 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.713114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.723005 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.769030 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x89zf"] Jan 29 16:37:10 crc kubenswrapper[4746]: E0129 16:37:10.769327 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7becf4a7-7ad1-4d20-9707-a28330253dfd" containerName="collect-profiles" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.769343 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7becf4a7-7ad1-4d20-9707-a28330253dfd" containerName="collect-profiles" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.770371 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7becf4a7-7ad1-4d20-9707-a28330253dfd" containerName="collect-profiles" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.773754 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.777979 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.795069 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x89zf"] Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.815603 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.819134 4746 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.819207 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.855548 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-t9srx\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.888005 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.888088 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.890059 4746 patch_prober.go:28] interesting pod/apiserver-76f77b778f-wcd6d container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.25:8443/livez\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.890107 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" podUID="f1442f3f-48b3-4356-bcb0-773b64ccab8f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.25:8443/livez\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.907557 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.916573 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82tk\" (UniqueName: \"kubernetes.io/projected/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-kube-api-access-p82tk\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.916619 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-utilities\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.916696 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-catalog-content\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.943720 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.944475 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.946105 4746 patch_prober.go:28] interesting pod/console-f9d7485db-np5s4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 29 16:37:10 crc kubenswrapper[4746]: I0129 16:37:10.946157 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-np5s4" podUID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.021891 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-catalog-content\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.022005 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82tk\" (UniqueName: \"kubernetes.io/projected/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-kube-api-access-p82tk\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.022037 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-utilities\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " 
pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.026233 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-utilities\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.028581 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-catalog-content\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.056791 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82tk\" (UniqueName: \"kubernetes.io/projected/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-kube-api-access-p82tk\") pod \"redhat-marketplace-x89zf\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.096418 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.169353 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vqqwl"] Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.171467 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.173657 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t9srx"] Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.186389 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqqwl"] Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.327776 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-catalog-content\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.327916 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl9w9\" (UniqueName: \"kubernetes.io/projected/66f7193e-02df-4eb7-a8af-292ff608c945-kube-api-access-kl9w9\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.327964 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-utilities\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.378790 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x89zf"] Jan 29 16:37:11 crc 
kubenswrapper[4746]: I0129 16:37:11.429671 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl9w9\" (UniqueName: \"kubernetes.io/projected/66f7193e-02df-4eb7-a8af-292ff608c945-kube-api-access-kl9w9\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.429745 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-utilities\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.429799 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-catalog-content\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.430970 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-catalog-content\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.431707 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-utilities\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.481775 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl9w9\" (UniqueName: \"kubernetes.io/projected/66f7193e-02df-4eb7-a8af-292ff608c945-kube-api-access-kl9w9\") pod \"redhat-marketplace-vqqwl\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.503859 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.532719 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:11 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:11 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:11 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.532806 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.546826 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" event={"ID":"27c3b17b-1acd-412d-90eb-5782d6db606e","Type":"ContainerStarted","Data":"f44f051a3c891dca641f95b919e060c47aabc03caa0c67279c2d91cf8d6d1364"} Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.546895 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" event={"ID":"27c3b17b-1acd-412d-90eb-5782d6db606e","Type":"ContainerStarted","Data":"24c3b9155ea2eb095c0be8a748bf66a9d646a012065c7e037721fd88658dbe57"} Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.547558 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.550756 4746 generic.go:334] "Generic (PLEG): container finished" podID="9d23c8d2-30ea-475c-b864-7d293459d078" containerID="72de6574723310ee7780bb7cad8f9308f14c5cadede07bfdf5e6a93b76015600" exitCode=0 Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.550850 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerDied","Data":"72de6574723310ee7780bb7cad8f9308f14c5cadede07bfdf5e6a93b76015600"} Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.552388 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x89zf" event={"ID":"b36b404d-6a34-46bf-a5c8-d4322e3ffc07","Type":"ContainerStarted","Data":"739b80845a65c5e19ba26dff4bc2d5504cdb0e921690158be4addf936b59563b"} Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.555321 4746 generic.go:334] "Generic (PLEG): container finished" podID="465b9d5a-3865-4d08-8528-f26c88c00198" containerID="83f359fceb43399700fea39d9897f4a71e0a7e75d99799f0b16cde0ba5648445" exitCode=0 Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.555528 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mf4w" event={"ID":"465b9d5a-3865-4d08-8528-f26c88c00198","Type":"ContainerDied","Data":"83f359fceb43399700fea39d9897f4a71e0a7e75d99799f0b16cde0ba5648445"} Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.568592 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" podStartSLOduration=133.568569972 podStartE2EDuration="2m13.568569972s" podCreationTimestamp="2026-01-29 
16:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:11.566699729 +0000 UTC m=+153.967284383" watchObservedRunningTime="2026-01-29 16:37:11.568569972 +0000 UTC m=+153.969154626" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.781593 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g5lkm"] Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.783245 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.786449 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.807297 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5lkm"] Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.824755 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.867346 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqqwl"] Jan 29 16:37:11 crc kubenswrapper[4746]: W0129 16:37:11.883986 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66f7193e_02df_4eb7_a8af_292ff608c945.slice/crio-913755dae6a5789f4c75f653c2a29178c5fd0f065223d3d51000e5300c687de9 WatchSource:0}: Error finding container 913755dae6a5789f4c75f653c2a29178c5fd0f065223d3d51000e5300c687de9: Status 404 returned error can't find the container with id 913755dae6a5789f4c75f653c2a29178c5fd0f065223d3d51000e5300c687de9 Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.937858 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kube-api-access\") pod \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.937935 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kubelet-dir\") pod \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\" (UID: \"5b9e90fc-fdbd-4047-9c64-03e12afb14a4\") " Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.938252 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-utilities\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.938318 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dmw\" (UniqueName: \"kubernetes.io/projected/989fe817-0cfd-4b55-aaaa-dd31bb39f219-kube-api-access-m8dmw\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.938369 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-catalog-content\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.938912 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5b9e90fc-fdbd-4047-9c64-03e12afb14a4" (UID: "5b9e90fc-fdbd-4047-9c64-03e12afb14a4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:37:11 crc kubenswrapper[4746]: I0129 16:37:11.978511 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5b9e90fc-fdbd-4047-9c64-03e12afb14a4" (UID: "5b9e90fc-fdbd-4047-9c64-03e12afb14a4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.040394 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-utilities\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.040448 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dmw\" (UniqueName: \"kubernetes.io/projected/989fe817-0cfd-4b55-aaaa-dd31bb39f219-kube-api-access-m8dmw\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.040493 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-catalog-content\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.040559 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.040574 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b9e90fc-fdbd-4047-9c64-03e12afb14a4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.041049 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-utilities\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.041161 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-catalog-content\") pod \"redhat-operators-g5lkm\" 
(UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.061488 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dmw\" (UniqueName: \"kubernetes.io/projected/989fe817-0cfd-4b55-aaaa-dd31bb39f219-kube-api-access-m8dmw\") pod \"redhat-operators-g5lkm\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.106002 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.173625 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94jzq"] Jan 29 16:37:12 crc kubenswrapper[4746]: E0129 16:37:12.173915 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9e90fc-fdbd-4047-9c64-03e12afb14a4" containerName="pruner" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.173932 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9e90fc-fdbd-4047-9c64-03e12afb14a4" containerName="pruner" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.174069 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9e90fc-fdbd-4047-9c64-03e12afb14a4" containerName="pruner" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.175084 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.198029 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94jzq"] Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.349015 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-utilities\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.349589 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwg9x\" (UniqueName: \"kubernetes.io/projected/ddce9ca9-703b-4142-8256-2eb692e9965d-kube-api-access-qwg9x\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.349683 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-catalog-content\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.451438 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwg9x\" (UniqueName: \"kubernetes.io/projected/ddce9ca9-703b-4142-8256-2eb692e9965d-kube-api-access-qwg9x\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.451584 4746 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-catalog-content\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.451623 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-utilities\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.452258 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-catalog-content\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.452335 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-utilities\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.477095 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.494620 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwg9x\" (UniqueName: \"kubernetes.io/projected/ddce9ca9-703b-4142-8256-2eb692e9965d-kube-api-access-qwg9x\") pod \"redhat-operators-94jzq\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.532687 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:12 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:12 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:12 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.532773 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.574343 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5b9e90fc-fdbd-4047-9c64-03e12afb14a4","Type":"ContainerDied","Data":"ebdcb0d61538effdc12c6decdf9755d7dffa059adfd6ccab1dfc581f12a5d031"} Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.574607 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebdcb0d61538effdc12c6decdf9755d7dffa059adfd6ccab1dfc581f12a5d031" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.574645 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.577930 4746 generic.go:334] "Generic (PLEG): container finished" podID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerID="8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45" exitCode=0 Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.578021 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x89zf" event={"ID":"b36b404d-6a34-46bf-a5c8-d4322e3ffc07","Type":"ContainerDied","Data":"8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45"} Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.601545 4746 generic.go:334] "Generic (PLEG): container finished" podID="66f7193e-02df-4eb7-a8af-292ff608c945" containerID="ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e" exitCode=0 Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.601729 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqqwl" event={"ID":"66f7193e-02df-4eb7-a8af-292ff608c945","Type":"ContainerDied","Data":"ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e"} Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.601771 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqqwl" event={"ID":"66f7193e-02df-4eb7-a8af-292ff608c945","Type":"ContainerStarted","Data":"913755dae6a5789f4c75f653c2a29178c5fd0f065223d3d51000e5300c687de9"} Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.641111 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5lkm"] Jan 29 16:37:12 crc kubenswrapper[4746]: I0129 16:37:12.649859 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:37:12 crc kubenswrapper[4746]: W0129 16:37:12.702037 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod989fe817_0cfd_4b55_aaaa_dd31bb39f219.slice/crio-dc8246597112be5d9f2d4bef6610d1ac120a9d198570a9a20acd634d4a23235c WatchSource:0}: Error finding container dc8246597112be5d9f2d4bef6610d1ac120a9d198570a9a20acd634d4a23235c: Status 404 returned error can't find the container with id dc8246597112be5d9f2d4bef6610d1ac120a9d198570a9a20acd634d4a23235c Jan 29 16:37:13 crc kubenswrapper[4746]: I0129 16:37:13.230605 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94jzq"] Jan 29 16:37:13 crc kubenswrapper[4746]: W0129 16:37:13.260412 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddce9ca9_703b_4142_8256_2eb692e9965d.slice/crio-8c4a2c0c116b9b3b0beec14fca00996d460a5915d90a4d377958f0524b49c21d WatchSource:0}: Error finding container 8c4a2c0c116b9b3b0beec14fca00996d460a5915d90a4d377958f0524b49c21d: Status 404 returned error can't find the container with id 8c4a2c0c116b9b3b0beec14fca00996d460a5915d90a4d377958f0524b49c21d Jan 29 16:37:13 crc kubenswrapper[4746]: I0129 16:37:13.540282 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:13 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:13 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:13 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:13 crc kubenswrapper[4746]: I0129 16:37:13.540349 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:13 crc kubenswrapper[4746]: I0129 16:37:13.615535 4746 generic.go:334] "Generic (PLEG): container finished" podID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerID="907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9" exitCode=0 Jan 29 16:37:13 crc kubenswrapper[4746]: I0129 16:37:13.615619 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5lkm" event={"ID":"989fe817-0cfd-4b55-aaaa-dd31bb39f219","Type":"ContainerDied","Data":"907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9"} Jan 29 16:37:13 crc kubenswrapper[4746]: I0129 16:37:13.615662 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5lkm" event={"ID":"989fe817-0cfd-4b55-aaaa-dd31bb39f219","Type":"ContainerStarted","Data":"dc8246597112be5d9f2d4bef6610d1ac120a9d198570a9a20acd634d4a23235c"} Jan 29 16:37:13 crc kubenswrapper[4746]: I0129 16:37:13.619091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94jzq" event={"ID":"ddce9ca9-703b-4142-8256-2eb692e9965d","Type":"ContainerStarted","Data":"8c4a2c0c116b9b3b0beec14fca00996d460a5915d90a4d377958f0524b49c21d"} Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.349213 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:37:14 
crc kubenswrapper[4746]: I0129 16:37:14.350292 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.356952 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.356956 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.358787 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.495745 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b28afa38-df92-4dbf-8b8b-9664102d67f2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.495965 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b28afa38-df92-4dbf-8b8b-9664102d67f2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.531544 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:14 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:14 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:14 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.531669 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.597850 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b28afa38-df92-4dbf-8b8b-9664102d67f2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.599356 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b28afa38-df92-4dbf-8b8b-9664102d67f2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.599443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b28afa38-df92-4dbf-8b8b-9664102d67f2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: 
I0129 16:37:14.629975 4746 generic.go:334] "Generic (PLEG): container finished" podID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerID="672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333" exitCode=0 Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.630567 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94jzq" event={"ID":"ddce9ca9-703b-4142-8256-2eb692e9965d","Type":"ContainerDied","Data":"672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333"} Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.656157 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b28afa38-df92-4dbf-8b8b-9664102d67f2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.683683 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 16:37:14 crc kubenswrapper[4746]: I0129 16:37:14.961488 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 16:37:14 crc kubenswrapper[4746]: W0129 16:37:14.974762 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb28afa38_df92_4dbf_8b8b_9664102d67f2.slice/crio-150604f1bc1b600c707d425e3e4d607850e948c69a1673cbb4c40687181fe148 WatchSource:0}: Error finding container 150604f1bc1b600c707d425e3e4d607850e948c69a1673cbb4c40687181fe148: Status 404 returned error can't find the container with id 150604f1bc1b600c707d425e3e4d607850e948c69a1673cbb4c40687181fe148 Jan 29 16:37:15 crc kubenswrapper[4746]: I0129 16:37:15.534671 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:15 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:15 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:15 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:15 crc kubenswrapper[4746]: I0129 16:37:15.535102 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:15 crc kubenswrapper[4746]: I0129 16:37:15.644081 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b28afa38-df92-4dbf-8b8b-9664102d67f2","Type":"ContainerStarted","Data":"150604f1bc1b600c707d425e3e4d607850e948c69a1673cbb4c40687181fe148"} Jan 29 16:37:15 crc kubenswrapper[4746]: I0129 16:37:15.895216 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:37:15 crc kubenswrapper[4746]: I0129 16:37:15.902089 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-wcd6d" Jan 29 16:37:16 crc kubenswrapper[4746]: I0129 16:37:16.027692 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2shkn" Jan 29 16:37:16 crc kubenswrapper[4746]: I0129 
16:37:16.531706 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:16 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:16 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:16 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:16 crc kubenswrapper[4746]: I0129 16:37:16.532590 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:16 crc kubenswrapper[4746]: I0129 16:37:16.671986 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b28afa38-df92-4dbf-8b8b-9664102d67f2","Type":"ContainerStarted","Data":"f79751c6dc297521d250cba2c55f3186355e265a2d17791c45eca6c36ac6f226"} Jan 29 16:37:16 crc kubenswrapper[4746]: I0129 16:37:16.703170 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.703147672 podStartE2EDuration="2.703147672s" podCreationTimestamp="2026-01-29 16:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:16.700770205 +0000 UTC m=+159.101354859" watchObservedRunningTime="2026-01-29 16:37:16.703147672 +0000 UTC m=+159.103732316" Jan 29 16:37:17 crc kubenswrapper[4746]: I0129 16:37:17.533251 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:17 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:17 crc kubenswrapper[4746]: [+]process-running ok Jan 29 16:37:17 crc kubenswrapper[4746]: healthz check failed Jan 29 16:37:17 crc kubenswrapper[4746]: I0129 16:37:17.533565 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:37:17 crc kubenswrapper[4746]: I0129 16:37:17.698104 4746 generic.go:334] "Generic (PLEG): container finished" podID="b28afa38-df92-4dbf-8b8b-9664102d67f2" containerID="f79751c6dc297521d250cba2c55f3186355e265a2d17791c45eca6c36ac6f226" exitCode=0 Jan 29 16:37:17 crc kubenswrapper[4746]: I0129 16:37:17.698161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b28afa38-df92-4dbf-8b8b-9664102d67f2","Type":"ContainerDied","Data":"f79751c6dc297521d250cba2c55f3186355e265a2d17791c45eca6c36ac6f226"} Jan 29 16:37:18 crc kubenswrapper[4746]: I0129 16:37:18.532872 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:37:18 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 29 16:37:18 crc kubenswrapper[4746]: [+]process-running ok Jan 29 
Jan 29 16:37:18 crc kubenswrapper[4746]: I0129 16:37:18.533230 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:18 crc kubenswrapper[4746]: I0129 16:37:18.967405 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.019612 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b28afa38-df92-4dbf-8b8b-9664102d67f2-kubelet-dir\") pod \"b28afa38-df92-4dbf-8b8b-9664102d67f2\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") "
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.019770 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b28afa38-df92-4dbf-8b8b-9664102d67f2-kube-api-access\") pod \"b28afa38-df92-4dbf-8b8b-9664102d67f2\" (UID: \"b28afa38-df92-4dbf-8b8b-9664102d67f2\") "
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.019884 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b28afa38-df92-4dbf-8b8b-9664102d67f2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b28afa38-df92-4dbf-8b8b-9664102d67f2" (UID: "b28afa38-df92-4dbf-8b8b-9664102d67f2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.020351 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b28afa38-df92-4dbf-8b8b-9664102d67f2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.029173 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28afa38-df92-4dbf-8b8b-9664102d67f2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b28afa38-df92-4dbf-8b8b-9664102d67f2" (UID: "b28afa38-df92-4dbf-8b8b-9664102d67f2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.065804 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.065894 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.121951 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b28afa38-df92-4dbf-8b8b-9664102d67f2-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.531155 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:19 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Jan 29 16:37:19 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:19 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.531641 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.715374 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b28afa38-df92-4dbf-8b8b-9664102d67f2","Type":"ContainerDied","Data":"150604f1bc1b600c707d425e3e4d607850e948c69a1673cbb4c40687181fe148"}
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.715429 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="150604f1bc1b600c707d425e3e4d607850e948c69a1673cbb4c40687181fe148"
Jan 29 16:37:19 crc kubenswrapper[4746]: I0129 16:37:19.715560 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.533432 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:20 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Jan 29 16:37:20 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:20 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.533567 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.536043 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.536111 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.536311 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.536334 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.549280 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.561337 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed3cddee-6243-41b8-9ac3-7ef6772d2960-metrics-certs\") pod \"network-metrics-daemon-f72wn\" (UID: \"ed3cddee-6243-41b8-9ac3-7ef6772d2960\") " pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.695969 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-f72wn"
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.944197 4746 patch_prober.go:28] interesting pod/console-f9d7485db-np5s4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Jan 29 16:37:20 crc kubenswrapper[4746]: I0129 16:37:20.944286 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-np5s4" podUID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused"
Jan 29 16:37:21 crc kubenswrapper[4746]: I0129 16:37:21.533633 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:21 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Jan 29 16:37:21 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:21 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:21 crc kubenswrapper[4746]: I0129 16:37:21.533730 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:22 crc kubenswrapper[4746]: I0129 16:37:22.531092 4746 patch_prober.go:28] interesting pod/router-default-5444994796-ggm4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:37:22 crc kubenswrapper[4746]: [+]has-synced ok
Jan 29 16:37:22 crc kubenswrapper[4746]: [+]process-running ok
Jan 29 16:37:22 crc kubenswrapper[4746]: healthz check failed
Jan 29 16:37:22 crc kubenswrapper[4746]: I0129 16:37:22.531201 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ggm4h" podUID="2c7650ca-1e87-4a25-8a8e-dae70ea5719c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:37:23 crc kubenswrapper[4746]: I0129 16:37:23.532681 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ggm4h"
Jan 29 16:37:23 crc kubenswrapper[4746]: I0129 16:37:23.536801 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ggm4h"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.535722 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.536235 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.535770 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.536321 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.536370 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-9v9dn"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.536854 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.536935 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.537100 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"cdfdb7ef63b4df260697ddbf2748c93c5cfc25fa83ea4dcef0ec7bdc5407eaba"} pod="openshift-console/downloads-7954f5f757-9v9dn" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.537216 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" containerID="cri-o://cdfdb7ef63b4df260697ddbf2748c93c5cfc25fa83ea4dcef0ec7bdc5407eaba" gracePeriod=2
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.914915 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.963169 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-np5s4"
Jan 29 16:37:30 crc kubenswrapper[4746]: I0129 16:37:30.972264 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-np5s4"
Jan 29 16:37:31 crc kubenswrapper[4746]: I0129 16:37:31.803548 4746 generic.go:334] "Generic (PLEG): container finished" podID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerID="cdfdb7ef63b4df260697ddbf2748c93c5cfc25fa83ea4dcef0ec7bdc5407eaba" exitCode=0
Jan 29 16:37:31 crc kubenswrapper[4746]: I0129 16:37:31.803660 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9v9dn" event={"ID":"939b72c6-643d-4e50-8223-7596ca0c5a6a","Type":"ContainerDied","Data":"cdfdb7ef63b4df260697ddbf2748c93c5cfc25fa83ea4dcef0ec7bdc5407eaba"}
Jan 29 16:37:34 crc kubenswrapper[4746]: E0129 16:37:34.706413 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:37:34 crc kubenswrapper[4746]: E0129 16:37:34.707315 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56q9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tqxz6_openshift-marketplace(d1bf7638-7d83-4b72-addf-51bae49b7390): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 16:37:34 crc kubenswrapper[4746]: E0129 16:37:34.708478 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tqxz6" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390"
Jan 29 16:37:37 crc kubenswrapper[4746]: E0129 16:37:37.495293 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tqxz6" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390"
Jan 29 16:37:40 crc kubenswrapper[4746]: E0129 16:37:40.370537 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 16:37:40 crc kubenswrapper[4746]: E0129 16:37:40.371664 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnhz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6mf4w_openshift-marketplace(465b9d5a-3865-4d08-8528-f26c88c00198): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 16:37:40 crc kubenswrapper[4746]: E0129 16:37:40.373086 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6mf4w" podUID="465b9d5a-3865-4d08-8528-f26c88c00198"
Jan 29 16:37:40 crc kubenswrapper[4746]: I0129 16:37:40.535578 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Jan 29 16:37:40 crc kubenswrapper[4746]: I0129 16:37:40.535649 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Jan 29 16:37:40 crc kubenswrapper[4746]: I0129 16:37:40.688065 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rmcph"
Jan 29 16:37:42 crc kubenswrapper[4746]: E0129 16:37:42.704021 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 16:37:42 crc kubenswrapper[4746]: E0129 16:37:42.704302 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vhhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8w7wb_openshift-marketplace(8b74b912-b845-497d-8566-6975dc1fdce5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 16:37:42 crc kubenswrapper[4746]: E0129 16:37:42.705595 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8w7wb" podUID="8b74b912-b845-497d-8566-6975dc1fdce5"
Jan 29 16:37:47 crc kubenswrapper[4746]: I0129 16:37:47.364282 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.749852 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 29 16:37:48 crc kubenswrapper[4746]: E0129 16:37:48.750479 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28afa38-df92-4dbf-8b8b-9664102d67f2" containerName="pruner"
Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.750496 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28afa38-df92-4dbf-8b8b-9664102d67f2" containerName="pruner"
Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.750610 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28afa38-df92-4dbf-8b8b-9664102d67f2" containerName="pruner"
Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.751010 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.753572 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.755271 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.762090 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.780006 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.780070 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.881294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.881601 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.881731 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:48 crc kubenswrapper[4746]: I0129 16:37:48.908751 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:49 crc kubenswrapper[4746]: I0129 16:37:49.065111 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:37:49 crc kubenswrapper[4746]: I0129 16:37:49.065296 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:37:49 crc kubenswrapper[4746]: I0129 16:37:49.071758 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:37:50 crc kubenswrapper[4746]: I0129 16:37:50.537882 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:50 crc kubenswrapper[4746]: I0129 16:37:50.539425 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.653059 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8w7wb" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.682200 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.682777 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwg9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-94jzq_openshift-marketplace(ddce9ca9-703b-4142-8256-2eb692e9965d): ErrImagePull: rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.684152 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-94jzq" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.684456 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.684644 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8dmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-g5lkm_openshift-marketplace(989fe817-0cfd-4b55-aaaa-dd31bb39f219): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.687613 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-g5lkm" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" Jan 29 16:37:52 crc kubenswrapper[4746]: I0129 16:37:52.946137 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:37:52 crc kubenswrapper[4746]: I0129 16:37:52.947335 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:52 crc kubenswrapper[4746]: I0129 16:37:52.960140 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.996440 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-g5lkm" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" Jan 29 16:37:52 crc kubenswrapper[4746]: E0129 16:37:52.996968 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-94jzq" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.049004 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e349656c-1d27-4785-9d19-ae7ee47808f9-kube-api-access\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.049105 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-var-lock\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.049142 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.068064 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-f72wn"] Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.150843 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-var-lock\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.150977 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.151021 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-var-lock\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.151163 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.151067 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e349656c-1d27-4785-9d19-ae7ee47808f9-kube-api-access\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.179705 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e349656c-1d27-4785-9d19-ae7ee47808f9-kube-api-access\") pod \"installer-9-crc\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.272997 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:37:53 crc kubenswrapper[4746]: I0129 16:37:53.952404 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f72wn" event={"ID":"ed3cddee-6243-41b8-9ac3-7ef6772d2960","Type":"ContainerStarted","Data":"1c0f4ccbb7a983ae92be3052bedf6f0dac8381bd916ac833d37416c3e739d003"} Jan 29 16:37:54 crc kubenswrapper[4746]: I0129 16:37:54.065574 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:37:54 crc kubenswrapper[4746]: W0129 16:37:54.081553 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod24a4c7fc_a0a0_4297_8abe_b8f9f6575a8a.slice/crio-089f44b633796744f613d923a0394162d842d185f394cee02d5162d509bdf756 WatchSource:0}: Error finding container 089f44b633796744f613d923a0394162d842d185f394cee02d5162d509bdf756: Status 404 returned error can't find the container with id 089f44b633796744f613d923a0394162d842d185f394cee02d5162d509bdf756 Jan 29 16:37:54 crc kubenswrapper[4746]: I0129 16:37:54.965684 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a","Type":"ContainerStarted","Data":"089f44b633796744f613d923a0394162d842d185f394cee02d5162d509bdf756"} Jan 29 16:37:54 crc kubenswrapper[4746]: I0129 16:37:54.969263 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9v9dn" event={"ID":"939b72c6-643d-4e50-8223-7596ca0c5a6a","Type":"ContainerStarted","Data":"451a3b2b873bf953d48a45ea5fe3be1d7cbe186f86d59d7e0b4773b41c698694"} Jan 29 16:37:54 crc kubenswrapper[4746]: I0129 16:37:54.969678 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9v9dn" Jan 29 16:37:54 crc kubenswrapper[4746]: I0129 16:37:54.970991 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:54 crc kubenswrapper[4746]: I0129 16:37:54.971045 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" 
podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:55 crc kubenswrapper[4746]: I0129 16:37:55.989737 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:37:55 crc kubenswrapper[4746]: I0129 16:37:55.990230 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:37:56 crc kubenswrapper[4746]: I0129 16:37:56.191395 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:37:56 crc kubenswrapper[4746]: W0129 16:37:56.209217 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode349656c_1d27_4785_9d19_ae7ee47808f9.slice/crio-83d974bb7874328c4b69c4ffa5a9c960c2e6d454ac650e2748a7c300f7c5ca72 WatchSource:0}: Error finding container 83d974bb7874328c4b69c4ffa5a9c960c2e6d454ac650e2748a7c300f7c5ca72: Status 404 returned error can't find the container with id 83d974bb7874328c4b69c4ffa5a9c960c2e6d454ac650e2748a7c300f7c5ca72 Jan 29 16:37:56 crc kubenswrapper[4746]: I0129 16:37:56.995650 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f72wn" event={"ID":"ed3cddee-6243-41b8-9ac3-7ef6772d2960","Type":"ContainerStarted","Data":"4ffaadf392875b77e0396be39a61cd665fb8ea39a972aac972d9e0460867885d"} Jan 29 16:37:56 crc kubenswrapper[4746]: I0129 16:37:56.997468 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e349656c-1d27-4785-9d19-ae7ee47808f9","Type":"ContainerStarted","Data":"367c324a91b4322d7db21a9c063f1198237756ad51670fcfc820a0daf7adf1a6"} Jan 29 16:37:56 crc kubenswrapper[4746]: I0129 16:37:56.997518 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e349656c-1d27-4785-9d19-ae7ee47808f9","Type":"ContainerStarted","Data":"83d974bb7874328c4b69c4ffa5a9c960c2e6d454ac650e2748a7c300f7c5ca72"} Jan 29 16:37:57 crc kubenswrapper[4746]: I0129 16:37:57.001360 4746 generic.go:334] "Generic (PLEG): container finished" podID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerID="c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5" exitCode=0 Jan 29 16:37:57 crc kubenswrapper[4746]: I0129 16:37:57.001452 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x89zf" event={"ID":"b36b404d-6a34-46bf-a5c8-d4322e3ffc07","Type":"ContainerDied","Data":"c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5"} Jan 29 16:37:57 crc kubenswrapper[4746]: I0129 16:37:57.003587 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqqwl" event={"ID":"66f7193e-02df-4eb7-a8af-292ff608c945","Type":"ContainerStarted","Data":"b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800"} Jan 29 16:37:57 crc kubenswrapper[4746]: I0129 16:37:57.005489 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a","Type":"ContainerStarted","Data":"e52be475f4f7ff7c1a4562628245e97eac89a2b08a9d5cbd0100b29055b0ee14"} Jan 29 16:37:57 crc kubenswrapper[4746]: I0129 16:37:57.010848 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerStarted","Data":"fd48b3d22d6e96ffcd190b860a5e1054aea57ca51a92be644df92962ccdb9861"} Jan 29 16:37:57 crc kubenswrapper[4746]: I0129 16:37:57.019087 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.019064 podStartE2EDuration="5.019064s" podCreationTimestamp="2026-01-29 16:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:57.014952655 +0000 UTC m=+199.415537299" watchObservedRunningTime="2026-01-29 16:37:57.019064 +0000 UTC m=+199.419648644" Jan 29 16:37:57 crc kubenswrapper[4746]: I0129 16:37:57.066260 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=9.06623649 podStartE2EDuration="9.06623649s" podCreationTimestamp="2026-01-29 16:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:37:57.063637732 +0000 UTC m=+199.464222376" watchObservedRunningTime="2026-01-29 16:37:57.06623649 +0000 UTC m=+199.466821134" Jan 29 16:38:00 crc kubenswrapper[4746]: I0129 16:38:00.535619 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:38:00 crc kubenswrapper[4746]: I0129 16:38:00.536004 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:38:00 crc kubenswrapper[4746]: I0129 16:38:00.539495 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-9v9dn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 29 16:38:00 crc kubenswrapper[4746]: I0129 16:38:00.539544 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9v9dn" podUID="939b72c6-643d-4e50-8223-7596ca0c5a6a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 29 16:38:09 crc kubenswrapper[4746]: I0129 16:38:09.093666 4746 generic.go:334] "Generic (PLEG): container finished" podID="24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a" containerID="e52be475f4f7ff7c1a4562628245e97eac89a2b08a9d5cbd0100b29055b0ee14" exitCode=0 Jan 29 16:38:09 crc kubenswrapper[4746]: I0129 16:38:09.093806 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a","Type":"ContainerDied","Data":"e52be475f4f7ff7c1a4562628245e97eac89a2b08a9d5cbd0100b29055b0ee14"} Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.104317 4746 generic.go:334] "Generic (PLEG): container finished" podID="66f7193e-02df-4eb7-a8af-292ff608c945" containerID="b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800" exitCode=0 Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.104426 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqqwl" event={"ID":"66f7193e-02df-4eb7-a8af-292ff608c945","Type":"ContainerDied","Data":"b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800"} Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.108509 4746 generic.go:334] "Generic (PLEG): container finished" podID="9d23c8d2-30ea-475c-b864-7d293459d078" containerID="fd48b3d22d6e96ffcd190b860a5e1054aea57ca51a92be644df92962ccdb9861" exitCode=0 Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.108609 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerDied","Data":"fd48b3d22d6e96ffcd190b860a5e1054aea57ca51a92be644df92962ccdb9861"} Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.121977 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-f72wn" event={"ID":"ed3cddee-6243-41b8-9ac3-7ef6772d2960","Type":"ContainerStarted","Data":"4bce740010da604ddf2818acd4742457f055e98cc9fbe9003e4cad0adcbe6742"} Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.556812 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9v9dn" Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.801248 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.836457 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kubelet-dir\") pod \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.836603 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a" (UID: "24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.836681 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kube-api-access\") pod \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\" (UID: \"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a\") " Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.837063 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.845471 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a" (UID: "24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:10 crc kubenswrapper[4746]: I0129 16:38:10.938614 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:11 crc kubenswrapper[4746]: I0129 16:38:11.129316 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:38:11 crc kubenswrapper[4746]: I0129 16:38:11.129324 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a","Type":"ContainerDied","Data":"089f44b633796744f613d923a0394162d842d185f394cee02d5162d509bdf756"} Jan 29 16:38:11 crc kubenswrapper[4746]: I0129 16:38:11.129401 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089f44b633796744f613d923a0394162d842d185f394cee02d5162d509bdf756" Jan 29 16:38:11 crc kubenswrapper[4746]: I0129 16:38:11.148149 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-f72wn" podStartSLOduration=194.14812591 podStartE2EDuration="3m14.14812591s" podCreationTimestamp="2026-01-29 16:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:38:11.145666086 +0000 UTC m=+213.546250730" watchObservedRunningTime="2026-01-29 16:38:11.14812591 +0000 UTC m=+213.548710564" Jan 29 16:38:14 crc kubenswrapper[4746]: I0129 16:38:14.164614 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5lkm" event={"ID":"989fe817-0cfd-4b55-aaaa-dd31bb39f219","Type":"ContainerStarted","Data":"9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6"} Jan 29 16:38:14 crc kubenswrapper[4746]: I0129 16:38:14.166722 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x89zf" event={"ID":"b36b404d-6a34-46bf-a5c8-d4322e3ffc07","Type":"ContainerStarted","Data":"51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f"} Jan 29 16:38:14 crc kubenswrapper[4746]: I0129 16:38:14.170019 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqqwl" 
event={"ID":"66f7193e-02df-4eb7-a8af-292ff608c945","Type":"ContainerStarted","Data":"9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197"} Jan 29 16:38:14 crc kubenswrapper[4746]: I0129 16:38:14.207998 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vqqwl" podStartSLOduration=2.072043949 podStartE2EDuration="1m3.207975111s" podCreationTimestamp="2026-01-29 16:37:11 +0000 UTC" firstStartedPulling="2026-01-29 16:37:12.603704497 +0000 UTC m=+155.004289141" lastFinishedPulling="2026-01-29 16:38:13.739635659 +0000 UTC m=+216.140220303" observedRunningTime="2026-01-29 16:38:14.205673822 +0000 UTC m=+216.606258466" watchObservedRunningTime="2026-01-29 16:38:14.207975111 +0000 UTC m=+216.608559756" Jan 29 16:38:14 crc kubenswrapper[4746]: I0129 16:38:14.228313 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x89zf" podStartSLOduration=3.083529226 podStartE2EDuration="1m4.228287128s" podCreationTimestamp="2026-01-29 16:37:10 +0000 UTC" firstStartedPulling="2026-01-29 16:37:12.587084701 +0000 UTC m=+154.987669345" lastFinishedPulling="2026-01-29 16:38:13.731842603 +0000 UTC m=+216.132427247" observedRunningTime="2026-01-29 16:38:14.222403879 +0000 UTC m=+216.622988523" watchObservedRunningTime="2026-01-29 16:38:14.228287128 +0000 UTC m=+216.628871782" Jan 29 16:38:15 crc kubenswrapper[4746]: I0129 16:38:15.183563 4746 generic.go:334] "Generic (PLEG): container finished" podID="465b9d5a-3865-4d08-8528-f26c88c00198" containerID="a510c736b07e781c4d13ec0bfd103b9db1e71a17cd25c295bb14902aeb687d2b" exitCode=0 Jan 29 16:38:15 crc kubenswrapper[4746]: I0129 16:38:15.184788 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mf4w" event={"ID":"465b9d5a-3865-4d08-8528-f26c88c00198","Type":"ContainerDied","Data":"a510c736b07e781c4d13ec0bfd103b9db1e71a17cd25c295bb14902aeb687d2b"} Jan 29 16:38:15 crc kubenswrapper[4746]: I0129 16:38:15.195687 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8w7wb" event={"ID":"8b74b912-b845-497d-8566-6975dc1fdce5","Type":"ContainerStarted","Data":"31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2"} Jan 29 16:38:15 crc kubenswrapper[4746]: I0129 16:38:15.198412 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerStarted","Data":"9b069aab46084d6137dd27a22a504e3941e6e94f268a3cba9cfaee6b8ac61f28"} Jan 29 16:38:15 crc kubenswrapper[4746]: I0129 16:38:15.200601 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94jzq" event={"ID":"ddce9ca9-703b-4142-8256-2eb692e9965d","Type":"ContainerStarted","Data":"9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d"} Jan 29 16:38:15 crc kubenswrapper[4746]: I0129 16:38:15.208865 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxz6" event={"ID":"d1bf7638-7d83-4b72-addf-51bae49b7390","Type":"ContainerStarted","Data":"ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560"} Jan 29 16:38:15 crc kubenswrapper[4746]: I0129 16:38:15.255688 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rxvtt" podStartSLOduration=3.8755128279999997 podStartE2EDuration="1m7.255663283s" 
podCreationTimestamp="2026-01-29 16:37:08 +0000 UTC" firstStartedPulling="2026-01-29 16:37:10.533095269 +0000 UTC m=+152.933679913" lastFinishedPulling="2026-01-29 16:38:13.913245714 +0000 UTC m=+216.313830368" observedRunningTime="2026-01-29 16:38:15.253630162 +0000 UTC m=+217.654214826" watchObservedRunningTime="2026-01-29 16:38:15.255663283 +0000 UTC m=+217.656247927" Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.221309 4746 generic.go:334] "Generic (PLEG): container finished" podID="8b74b912-b845-497d-8566-6975dc1fdce5" containerID="31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2" exitCode=0 Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.221408 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8w7wb" event={"ID":"8b74b912-b845-497d-8566-6975dc1fdce5","Type":"ContainerDied","Data":"31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2"} Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.224118 4746 generic.go:334] "Generic (PLEG): container finished" podID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerID="9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6" exitCode=0 Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.224217 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5lkm" event={"ID":"989fe817-0cfd-4b55-aaaa-dd31bb39f219","Type":"ContainerDied","Data":"9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6"} Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.232100 4746 generic.go:334] "Generic (PLEG): container finished" podID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerID="9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d" exitCode=0 Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.232203 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94jzq" event={"ID":"ddce9ca9-703b-4142-8256-2eb692e9965d","Type":"ContainerDied","Data":"9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d"} Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.237814 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerID="ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560" exitCode=0 Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.237882 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxz6" event={"ID":"d1bf7638-7d83-4b72-addf-51bae49b7390","Type":"ContainerDied","Data":"ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560"} Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.245616 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mf4w" event={"ID":"465b9d5a-3865-4d08-8528-f26c88c00198","Type":"ContainerStarted","Data":"8975a787d0ac21693652446eb5461dbbf9fbcc8684c01632f9c4c51c024e5e74"} Jan 29 16:38:16 crc kubenswrapper[4746]: I0129 16:38:16.334805 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6mf4w" podStartSLOduration=2.130961954 podStartE2EDuration="1m7.334777117s" podCreationTimestamp="2026-01-29 16:37:09 +0000 UTC" firstStartedPulling="2026-01-29 16:37:10.496422452 +0000 UTC m=+152.897007096" lastFinishedPulling="2026-01-29 16:38:15.700237615 +0000 UTC m=+218.100822259" observedRunningTime="2026-01-29 16:38:16.331850298 +0000 UTC m=+218.732434942" 
watchObservedRunningTime="2026-01-29 16:38:16.334777117 +0000 UTC m=+218.735361761" Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.255089 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxz6" event={"ID":"d1bf7638-7d83-4b72-addf-51bae49b7390","Type":"ContainerStarted","Data":"a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477"} Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.258498 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8w7wb" event={"ID":"8b74b912-b845-497d-8566-6975dc1fdce5","Type":"ContainerStarted","Data":"7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2"} Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.260837 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5lkm" event={"ID":"989fe817-0cfd-4b55-aaaa-dd31bb39f219","Type":"ContainerStarted","Data":"90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8"} Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.263258 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94jzq" event={"ID":"ddce9ca9-703b-4142-8256-2eb692e9965d","Type":"ContainerStarted","Data":"8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135"} Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.279779 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tqxz6" podStartSLOduration=2.922314418 podStartE2EDuration="1m9.279560768s" podCreationTimestamp="2026-01-29 16:37:08 +0000 UTC" firstStartedPulling="2026-01-29 16:37:10.49598537 +0000 UTC m=+152.896570014" lastFinishedPulling="2026-01-29 16:38:16.85323171 +0000 UTC m=+219.253816364" observedRunningTime="2026-01-29 16:38:17.274421942 +0000 UTC m=+219.675006586" watchObservedRunningTime="2026-01-29 16:38:17.279560768 +0000 UTC m=+219.680145402" Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.299132 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g5lkm" podStartSLOduration=3.193950306 podStartE2EDuration="1m6.299112831s" podCreationTimestamp="2026-01-29 16:37:11 +0000 UTC" firstStartedPulling="2026-01-29 16:37:13.621204457 +0000 UTC m=+156.021789091" lastFinishedPulling="2026-01-29 16:38:16.726366982 +0000 UTC m=+219.126951616" observedRunningTime="2026-01-29 16:38:17.295690077 +0000 UTC m=+219.696274721" watchObservedRunningTime="2026-01-29 16:38:17.299112831 +0000 UTC m=+219.699697475" Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.319839 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8w7wb" podStartSLOduration=3.067880871 podStartE2EDuration="1m9.319818609s" podCreationTimestamp="2026-01-29 16:37:08 +0000 UTC" firstStartedPulling="2026-01-29 16:37:10.509385665 +0000 UTC m=+152.909970309" lastFinishedPulling="2026-01-29 16:38:16.761323403 +0000 UTC m=+219.161908047" observedRunningTime="2026-01-29 16:38:17.316986533 +0000 UTC m=+219.717571177" watchObservedRunningTime="2026-01-29 16:38:17.319818609 +0000 UTC m=+219.720403253" Jan 29 16:38:17 crc kubenswrapper[4746]: I0129 16:38:17.340649 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94jzq" podStartSLOduration=3.105215897 podStartE2EDuration="1m5.340619959s" 
podCreationTimestamp="2026-01-29 16:37:12 +0000 UTC" firstStartedPulling="2026-01-29 16:37:14.666765212 +0000 UTC m=+157.067349856" lastFinishedPulling="2026-01-29 16:38:16.902169274 +0000 UTC m=+219.302753918" observedRunningTime="2026-01-29 16:38:17.336853675 +0000 UTC m=+219.737438319" watchObservedRunningTime="2026-01-29 16:38:17.340619959 +0000 UTC m=+219.741204603" Jan 29 16:38:18 crc kubenswrapper[4746]: I0129 16:38:18.987603 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tqxz6" Jan 29 16:38:18 crc kubenswrapper[4746]: I0129 16:38:18.989033 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tqxz6" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.065565 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.065661 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.065729 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.066479 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.066553 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f" gracePeriod=600 Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.099394 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8w7wb" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.099724 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8w7wb" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.320351 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rxvtt" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.320929 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rxvtt" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.563418 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6mf4w" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.563485 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6mf4w" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.574945 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8w7wb" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.576835 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rxvtt" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.584314 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tqxz6" Jan 29 16:38:19 crc kubenswrapper[4746]: I0129 16:38:19.628837 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6mf4w" Jan 29 16:38:20 crc kubenswrapper[4746]: I0129 16:38:20.282588 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f" exitCode=0 Jan 29 16:38:20 crc kubenswrapper[4746]: I0129 16:38:20.282778 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f"} Jan 29 16:38:20 crc kubenswrapper[4746]: I0129 16:38:20.334548 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rxvtt" Jan 29 16:38:20 crc kubenswrapper[4746]: I0129 16:38:20.336942 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6mf4w" Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.006842 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mf4w"] Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.097265 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.097698 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.155783 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.333773 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.504546 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.504603 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:38:21 crc kubenswrapper[4746]: I0129 16:38:21.543341 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:38:22 crc kubenswrapper[4746]: I0129 16:38:22.106760 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:38:22 crc 
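When a liveness probe fails, the kubelet does what the machine-config-daemon entries above show: it logs "failed liveness probe, will be restarted", stops the container with the pod's grace period (600 seconds here), and lets the sync loop recreate it. In CRI terms the kill is a StopContainer call whose timeout is that grace period; a minimal sketch, assuming a generated CRI client:

package restart

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopWithGrace stops a live-but-unhealthy container, giving it time
// to exit cleanly. The 600-second timeout mirrors gracePeriod=600 in
// the log above; after the stop, the sync loop restarts the container.
func stopWithGrace(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: id,  // e.g. the cri-o://2e1a1257... ID being killed
		Timeout:     600, // seconds before the runtime escalates to SIGKILL
	})
	return err
}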
Jan 29 16:38:22 crc kubenswrapper[4746]: I0129 16:38:22.298801 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6mf4w" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="registry-server" containerID="cri-o://8975a787d0ac21693652446eb5461dbbf9fbcc8684c01632f9c4c51c024e5e74" gracePeriod=2
Jan 29 16:38:22 crc kubenswrapper[4746]: I0129 16:38:22.352610 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vqqwl"
Jan 29 16:38:22 crc kubenswrapper[4746]: I0129 16:38:22.650138 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94jzq"
Jan 29 16:38:22 crc kubenswrapper[4746]: I0129 16:38:22.650216 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94jzq"
Jan 29 16:38:23 crc kubenswrapper[4746]: I0129 16:38:23.145370 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g5lkm" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="registry-server" probeResult="failure" output=<
Jan 29 16:38:23 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s
Jan 29 16:38:23 crc kubenswrapper[4746]: >
Jan 29 16:38:23 crc kubenswrapper[4746]: I0129 16:38:23.406106 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxvtt"]
Jan 29 16:38:23 crc kubenswrapper[4746]: I0129 16:38:23.406585 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rxvtt" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="registry-server" containerID="cri-o://9b069aab46084d6137dd27a22a504e3941e6e94f268a3cba9cfaee6b8ac61f28" gracePeriod=2
Jan 29 16:38:23 crc kubenswrapper[4746]: I0129 16:38:23.689179 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-94jzq" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="registry-server" probeResult="failure" output=<
Jan 29 16:38:23 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s
Jan 29 16:38:23 crc kubenswrapper[4746]: >
Jan 29 16:38:24 crc kubenswrapper[4746]: I0129 16:38:24.312335 4746 generic.go:334] "Generic (PLEG): container finished" podID="465b9d5a-3865-4d08-8528-f26c88c00198" containerID="8975a787d0ac21693652446eb5461dbbf9fbcc8684c01632f9c4c51c024e5e74" exitCode=0
Jan 29 16:38:24 crc kubenswrapper[4746]: I0129 16:38:24.312420 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mf4w" event={"ID":"465b9d5a-3865-4d08-8528-f26c88c00198","Type":"ContainerDied","Data":"8975a787d0ac21693652446eb5461dbbf9fbcc8684c01632f9c4c51c024e5e74"}
Jan 29 16:38:24 crc kubenswrapper[4746]: I0129 16:38:24.316327 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"187ccb1bc8cb9fe0656edc934ee5b75e1344cc20ec1b0499ab1a774a533f9c67"}
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.211541 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.271607 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-catalog-content\") pod \"465b9d5a-3865-4d08-8528-f26c88c00198\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") "
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.271708 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnhz8\" (UniqueName: \"kubernetes.io/projected/465b9d5a-3865-4d08-8528-f26c88c00198-kube-api-access-bnhz8\") pod \"465b9d5a-3865-4d08-8528-f26c88c00198\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") "
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.271729 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-utilities\") pod \"465b9d5a-3865-4d08-8528-f26c88c00198\" (UID: \"465b9d5a-3865-4d08-8528-f26c88c00198\") "
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.272689 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-utilities" (OuterVolumeSpecName: "utilities") pod "465b9d5a-3865-4d08-8528-f26c88c00198" (UID: "465b9d5a-3865-4d08-8528-f26c88c00198"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.280136 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465b9d5a-3865-4d08-8528-f26c88c00198-kube-api-access-bnhz8" (OuterVolumeSpecName: "kube-api-access-bnhz8") pod "465b9d5a-3865-4d08-8528-f26c88c00198" (UID: "465b9d5a-3865-4d08-8528-f26c88c00198"). InnerVolumeSpecName "kube-api-access-bnhz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.326483 4746 generic.go:334] "Generic (PLEG): container finished" podID="9d23c8d2-30ea-475c-b864-7d293459d078" containerID="9b069aab46084d6137dd27a22a504e3941e6e94f268a3cba9cfaee6b8ac61f28" exitCode=0
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.326655 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerDied","Data":"9b069aab46084d6137dd27a22a504e3941e6e94f268a3cba9cfaee6b8ac61f28"}
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.334042 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6mf4w"
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.334116 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6mf4w" event={"ID":"465b9d5a-3865-4d08-8528-f26c88c00198","Type":"ContainerDied","Data":"30e26e0fec94f988b6adbd5fc82735f74b0985adf5a6eb8cf7f90f910511143d"}
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.334159 4746 scope.go:117] "RemoveContainer" containerID="8975a787d0ac21693652446eb5461dbbf9fbcc8684c01632f9c4c51c024e5e74"
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.339500 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "465b9d5a-3865-4d08-8528-f26c88c00198" (UID: "465b9d5a-3865-4d08-8528-f26c88c00198"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.362449 4746 scope.go:117] "RemoveContainer" containerID="a510c736b07e781c4d13ec0bfd103b9db1e71a17cd25c295bb14902aeb687d2b"
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.373495 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnhz8\" (UniqueName: \"kubernetes.io/projected/465b9d5a-3865-4d08-8528-f26c88c00198-kube-api-access-bnhz8\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.373539 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.373557 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/465b9d5a-3865-4d08-8528-f26c88c00198-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.387504 4746 scope.go:117] "RemoveContainer" containerID="83f359fceb43399700fea39d9897f4a71e0a7e75d99799f0b16cde0ba5648445"
Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.707294 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxvtt"
Need to start a new one" pod="openshift-marketplace/certified-operators-rxvtt" Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.718236 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6mf4w"] Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.719264 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6mf4w"] Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.777421 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfh72\" (UniqueName: \"kubernetes.io/projected/9d23c8d2-30ea-475c-b864-7d293459d078-kube-api-access-nfh72\") pod \"9d23c8d2-30ea-475c-b864-7d293459d078\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.777603 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-catalog-content\") pod \"9d23c8d2-30ea-475c-b864-7d293459d078\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.777638 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-utilities\") pod \"9d23c8d2-30ea-475c-b864-7d293459d078\" (UID: \"9d23c8d2-30ea-475c-b864-7d293459d078\") " Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.778698 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-utilities" (OuterVolumeSpecName: "utilities") pod "9d23c8d2-30ea-475c-b864-7d293459d078" (UID: "9d23c8d2-30ea-475c-b864-7d293459d078"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.784375 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d23c8d2-30ea-475c-b864-7d293459d078-kube-api-access-nfh72" (OuterVolumeSpecName: "kube-api-access-nfh72") pod "9d23c8d2-30ea-475c-b864-7d293459d078" (UID: "9d23c8d2-30ea-475c-b864-7d293459d078"). InnerVolumeSpecName "kube-api-access-nfh72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.811330 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqqwl"] Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.812141 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vqqwl" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="registry-server" containerID="cri-o://9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197" gracePeriod=2 Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.833588 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d23c8d2-30ea-475c-b864-7d293459d078" (UID: "9d23c8d2-30ea-475c-b864-7d293459d078"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.879496 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.879549 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d23c8d2-30ea-475c-b864-7d293459d078-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:25 crc kubenswrapper[4746]: I0129 16:38:25.879564 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfh72\" (UniqueName: \"kubernetes.io/projected/9d23c8d2-30ea-475c-b864-7d293459d078-kube-api-access-nfh72\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.155564 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.183572 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-catalog-content\") pod \"66f7193e-02df-4eb7-a8af-292ff608c945\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.183619 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-utilities\") pod \"66f7193e-02df-4eb7-a8af-292ff608c945\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.183736 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl9w9\" (UniqueName: \"kubernetes.io/projected/66f7193e-02df-4eb7-a8af-292ff608c945-kube-api-access-kl9w9\") pod \"66f7193e-02df-4eb7-a8af-292ff608c945\" (UID: \"66f7193e-02df-4eb7-a8af-292ff608c945\") " Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.185054 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-utilities" (OuterVolumeSpecName: "utilities") pod "66f7193e-02df-4eb7-a8af-292ff608c945" (UID: "66f7193e-02df-4eb7-a8af-292ff608c945"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.190568 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f7193e-02df-4eb7-a8af-292ff608c945-kube-api-access-kl9w9" (OuterVolumeSpecName: "kube-api-access-kl9w9") pod "66f7193e-02df-4eb7-a8af-292ff608c945" (UID: "66f7193e-02df-4eb7-a8af-292ff608c945"). InnerVolumeSpecName "kube-api-access-kl9w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.214597 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66f7193e-02df-4eb7-a8af-292ff608c945" (UID: "66f7193e-02df-4eb7-a8af-292ff608c945"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.285437 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.285496 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f7193e-02df-4eb7-a8af-292ff608c945-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.285511 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl9w9\" (UniqueName: \"kubernetes.io/projected/66f7193e-02df-4eb7-a8af-292ff608c945-kube-api-access-kl9w9\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.345908 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rxvtt" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.345899 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rxvtt" event={"ID":"9d23c8d2-30ea-475c-b864-7d293459d078","Type":"ContainerDied","Data":"c51f67bc8f5eef3b2b07075e2400f296e9c87d78f35e70357fb026f2cc93b890"} Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.346210 4746 scope.go:117] "RemoveContainer" containerID="9b069aab46084d6137dd27a22a504e3941e6e94f268a3cba9cfaee6b8ac61f28" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.353614 4746 generic.go:334] "Generic (PLEG): container finished" podID="66f7193e-02df-4eb7-a8af-292ff608c945" containerID="9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197" exitCode=0 Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.353699 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqqwl" event={"ID":"66f7193e-02df-4eb7-a8af-292ff608c945","Type":"ContainerDied","Data":"9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197"} Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.353726 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqqwl" event={"ID":"66f7193e-02df-4eb7-a8af-292ff608c945","Type":"ContainerDied","Data":"913755dae6a5789f4c75f653c2a29178c5fd0f065223d3d51000e5300c687de9"} Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.353854 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqqwl" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.389612 4746 scope.go:117] "RemoveContainer" containerID="fd48b3d22d6e96ffcd190b860a5e1054aea57ca51a92be644df92962ccdb9861" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.390976 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rxvtt"] Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.401240 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rxvtt"] Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.410146 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqqwl"] Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.412126 4746 scope.go:117] "RemoveContainer" containerID="72de6574723310ee7780bb7cad8f9308f14c5cadede07bfdf5e6a93b76015600" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.414715 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqqwl"] Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.426325 4746 scope.go:117] "RemoveContainer" containerID="9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.453532 4746 scope.go:117] "RemoveContainer" containerID="b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.454599 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" path="/var/lib/kubelet/pods/465b9d5a-3865-4d08-8528-f26c88c00198/volumes" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.455659 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" path="/var/lib/kubelet/pods/66f7193e-02df-4eb7-a8af-292ff608c945/volumes" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.456764 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" path="/var/lib/kubelet/pods/9d23c8d2-30ea-475c-b864-7d293459d078/volumes" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.470239 4746 scope.go:117] "RemoveContainer" containerID="ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.486320 4746 scope.go:117] "RemoveContainer" containerID="9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197" Jan 29 16:38:26 crc kubenswrapper[4746]: E0129 16:38:26.486927 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197\": container with ID starting with 9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197 not found: ID does not exist" containerID="9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.486971 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197"} err="failed to get container status \"9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197\": rpc error: code = NotFound desc = could not find container \"9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197\": container with ID starting with 
9abb5d67d37468cc96225c86d00ecb7d959d60caae0b4856c188a1b4249c6197 not found: ID does not exist" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.487001 4746 scope.go:117] "RemoveContainer" containerID="b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800" Jan 29 16:38:26 crc kubenswrapper[4746]: E0129 16:38:26.487316 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800\": container with ID starting with b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800 not found: ID does not exist" containerID="b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.487347 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800"} err="failed to get container status \"b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800\": rpc error: code = NotFound desc = could not find container \"b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800\": container with ID starting with b530f461f9ee1e6f7442b62c2731c9c59d76b8dd66c8c14aac56bb8272530800 not found: ID does not exist" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.487368 4746 scope.go:117] "RemoveContainer" containerID="ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e" Jan 29 16:38:26 crc kubenswrapper[4746]: E0129 16:38:26.488848 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e\": container with ID starting with ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e not found: ID does not exist" containerID="ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e" Jan 29 16:38:26 crc kubenswrapper[4746]: I0129 16:38:26.488897 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e"} err="failed to get container status \"ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e\": rpc error: code = NotFound desc = could not find container \"ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e\": container with ID starting with ba3d77bfb38c499a81f664ae0aa59c5698fea9765900d12ca7847a518a2eb94e not found: ID does not exist" Jan 29 16:38:29 crc kubenswrapper[4746]: I0129 16:38:29.037266 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tqxz6" Jan 29 16:38:29 crc kubenswrapper[4746]: I0129 16:38:29.152311 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8w7wb" Jan 29 16:38:32 crc kubenswrapper[4746]: I0129 16:38:32.156536 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:38:32 crc kubenswrapper[4746]: I0129 16:38:32.202045 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:38:32 crc kubenswrapper[4746]: I0129 16:38:32.703958 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:38:32 crc 
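The NotFound errors above are benign: the containers had already been removed, so the follow-up ContainerStatus lookup during DeleteContainer cannot find them. Deletion is effectively idempotent when the caller treats gRPC NotFound as success; a minimal check using the standard grpc-go status helpers:

package cleanup

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether a runtime-service error just means
// the container is gone, as in the log entries above.
func alreadyRemoved(err error) bool {
	return status.Code(err) == codes.NotFound
}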
Jan 29 16:38:33 crc kubenswrapper[4746]: I0129 16:38:33.205289 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94jzq"]
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321275 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321869 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321884 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321896 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="extract-content"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321902 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="extract-content"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321917 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321923 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321934 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="extract-content"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321939 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="extract-content"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321947 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="extract-utilities"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321952 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="extract-utilities"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321962 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321968 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321981 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a" containerName="pruner"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.321988 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a" containerName="pruner"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.321998 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="extract-utilities"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322004 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="extract-utilities"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.322043 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="extract-content"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322052 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="extract-content"
Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.322059 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="extract-utilities"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322065 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="extract-utilities"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322164 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="465b9d5a-3865-4d08-8528-f26c88c00198" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322175 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a4c7fc-a0a0-4297-8abe-b8f9f6575a8a" containerName="pruner"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322201 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d23c8d2-30ea-475c-b864-7d293459d078" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322210 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f7193e-02df-4eb7-a8af-292ff608c945" containerName="registry-server"
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322556 4746 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322757 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
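The paired cpu_manager/state_mem and memory_manager entries above are stale-state cleanup: when a new static pod is admitted, resource-manager checkpoints belonging to containers of pods that no longer exist are dropped. A rough sketch of that sweep over hypothetical state (not the kubelet's actual data structures):

package cpustate

// key identifies one checkpointed assignment per (pod, container) pair.
type key struct{ podUID, container string }

// removeStale drops assignments whose pod is no longer active, which
// is what each "RemoveStaleState: removing container" followed by
// "Deleted CPUSet assignment" pair above records.
func removeStale(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			delete(assignments, k)
		}
	}
}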
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.322932 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918" gracePeriod=15 Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323035 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320" gracePeriod=15 Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323071 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82" gracePeriod=15 Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323038 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b" gracePeriod=15 Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323123 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41" gracePeriod=15 Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323557 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323689 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323699 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323706 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323712 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323719 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323724 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323734 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323741 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323749 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323754 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323761 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323766 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323774 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323779 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.323787 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323793 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323888 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323898 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323908 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323914 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323923 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.323930 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.324134 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.365343 4746 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.405169 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94jzq" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="registry-server" containerID="cri-o://8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135" gracePeriod=2 Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.405913 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:34 crc kubenswrapper[4746]: E0129 16:38:34.405951 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.22:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-94jzq.188f410fbb73f254 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-94jzq,UID:ddce9ca9-703b-4142-8256-2eb692e9965d,APIVersion:v1,ResourceVersion:28664,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:38:34.40514722 +0000 UTC m=+236.805731864,LastTimestamp:2026-01-29 16:38:34.40514722 +0000 UTC m=+236.805731864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.406253 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.406488 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.471975 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.472048 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.472091 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.472178 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.472230 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.472251 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.472276 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.472301 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573470 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573564 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573599 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573648 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573635 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573681 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573740 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573764 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573767 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573826 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573677 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 
16:38:34.573797 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573703 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573683 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.573734 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: I0129 16:38:34.667133 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:38:34 crc kubenswrapper[4746]: W0129 16:38:34.701664 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-360e7ac9769b1087045700773cc52ba279ec102d1da1fe570a3cea65be94c909 WatchSource:0}: Error finding container 360e7ac9769b1087045700773cc52ba279ec102d1da1fe570a3cea65be94c909: Status 404 returned error can't find the container with id 360e7ac9769b1087045700773cc52ba279ec102d1da1fe570a3cea65be94c909 Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.295444 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.297278 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.297729 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.411698 4746 generic.go:334] "Generic (PLEG): container finished" podID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerID="8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135" exitCode=0 Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.411801 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94jzq" event={"ID":"ddce9ca9-703b-4142-8256-2eb692e9965d","Type":"ContainerDied","Data":"8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135"} Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.411848 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94jzq" event={"ID":"ddce9ca9-703b-4142-8256-2eb692e9965d","Type":"ContainerDied","Data":"8c4a2c0c116b9b3b0beec14fca00996d460a5915d90a4d377958f0524b49c21d"} Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.411874 4746 scope.go:117] "RemoveContainer" containerID="8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.412054 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94jzq" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.413075 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.413445 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.415519 4746 generic.go:334] "Generic (PLEG): container finished" podID="e349656c-1d27-4785-9d19-ae7ee47808f9" containerID="367c324a91b4322d7db21a9c063f1198237756ad51670fcfc820a0daf7adf1a6" exitCode=0 Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.415603 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e349656c-1d27-4785-9d19-ae7ee47808f9","Type":"ContainerDied","Data":"367c324a91b4322d7db21a9c063f1198237756ad51670fcfc820a0daf7adf1a6"} Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.416526 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.416921 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.418017 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.418146 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e00d15b7141630748f17db544e855bf10e0e820d774afcd9837363b6d0d29184"} Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.418888 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"360e7ac9769b1087045700773cc52ba279ec102d1da1fe570a3cea65be94c909"} Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.418996 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.419424 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.419879 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.420966 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.428988 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.429809 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320" exitCode=0 Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.429855 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b" exitCode=0 Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.429872 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82" exitCode=0 Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.429887 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41" exitCode=2 Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.430971 4746 scope.go:117] "RemoveContainer" containerID="9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.485683 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwg9x\" (UniqueName: \"kubernetes.io/projected/ddce9ca9-703b-4142-8256-2eb692e9965d-kube-api-access-qwg9x\") pod \"ddce9ca9-703b-4142-8256-2eb692e9965d\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.485788 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-catalog-content\") pod \"ddce9ca9-703b-4142-8256-2eb692e9965d\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.485921 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-utilities\") pod \"ddce9ca9-703b-4142-8256-2eb692e9965d\" (UID: \"ddce9ca9-703b-4142-8256-2eb692e9965d\") " Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.488096 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-utilities" (OuterVolumeSpecName: "utilities") pod "ddce9ca9-703b-4142-8256-2eb692e9965d" (UID: "ddce9ca9-703b-4142-8256-2eb692e9965d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.489290 4746 scope.go:117] "RemoveContainer" containerID="672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.494541 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddce9ca9-703b-4142-8256-2eb692e9965d-kube-api-access-qwg9x" (OuterVolumeSpecName: "kube-api-access-qwg9x") pod "ddce9ca9-703b-4142-8256-2eb692e9965d" (UID: "ddce9ca9-703b-4142-8256-2eb692e9965d"). InnerVolumeSpecName "kube-api-access-qwg9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.523984 4746 scope.go:117] "RemoveContainer" containerID="8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135" Jan 29 16:38:35 crc kubenswrapper[4746]: E0129 16:38:35.524607 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135\": container with ID starting with 8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135 not found: ID does not exist" containerID="8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.524671 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135"} err="failed to get container status \"8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135\": rpc error: code = NotFound desc = could not find container \"8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135\": container with ID starting with 8f032394feee9d96869de47396dff6921d4b0d91dca3cfae6e210f66baecd135 not found: ID does not exist" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.524717 4746 scope.go:117] "RemoveContainer" containerID="9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d" Jan 29 16:38:35 crc kubenswrapper[4746]: E0129 16:38:35.525274 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d\": container with ID starting with 9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d not found: ID does not exist" containerID="9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.525368 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d"} err="failed to get container status \"9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d\": rpc error: code = NotFound desc = could not find container 
\"9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d\": container with ID starting with 9ad97283265a237839507cb3e8a716a96057c5ac5cbe1f1ffa5a59e86ecb2a2d not found: ID does not exist" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.525626 4746 scope.go:117] "RemoveContainer" containerID="672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333" Jan 29 16:38:35 crc kubenswrapper[4746]: E0129 16:38:35.526015 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333\": container with ID starting with 672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333 not found: ID does not exist" containerID="672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.526063 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333"} err="failed to get container status \"672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333\": rpc error: code = NotFound desc = could not find container \"672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333\": container with ID starting with 672a0de84d331ae722cb3042fe9c538158f08ab7d96dd53cd16b9ffe9642f333 not found: ID does not exist" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.526098 4746 scope.go:117] "RemoveContainer" containerID="8b7e663e502017191b7f9b4aad44529aef0d71e83aec7b83d792c94711ade28c" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.588080 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.588128 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwg9x\" (UniqueName: \"kubernetes.io/projected/ddce9ca9-703b-4142-8256-2eb692e9965d-kube-api-access-qwg9x\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.640716 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddce9ca9-703b-4142-8256-2eb692e9965d" (UID: "ddce9ca9-703b-4142-8256-2eb692e9965d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.689898 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddce9ca9-703b-4142-8256-2eb692e9965d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.728953 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.729737 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:35 crc kubenswrapper[4746]: I0129 16:38:35.730280 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.449116 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.806933 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.807981 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.808489 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.808738 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.914733 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-var-lock\") pod \"e349656c-1d27-4785-9d19-ae7ee47808f9\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.914818 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e349656c-1d27-4785-9d19-ae7ee47808f9-kube-api-access\") pod \"e349656c-1d27-4785-9d19-ae7ee47808f9\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.914923 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-kubelet-dir\") pod \"e349656c-1d27-4785-9d19-ae7ee47808f9\" (UID: \"e349656c-1d27-4785-9d19-ae7ee47808f9\") " Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.915026 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e349656c-1d27-4785-9d19-ae7ee47808f9" (UID: "e349656c-1d27-4785-9d19-ae7ee47808f9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.915109 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.915549 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-var-lock" (OuterVolumeSpecName: "var-lock") pod "e349656c-1d27-4785-9d19-ae7ee47808f9" (UID: "e349656c-1d27-4785-9d19-ae7ee47808f9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:38:36 crc kubenswrapper[4746]: I0129 16:38:36.921959 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e349656c-1d27-4785-9d19-ae7ee47808f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e349656c-1d27-4785-9d19-ae7ee47808f9" (UID: "e349656c-1d27-4785-9d19-ae7ee47808f9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.016391 4746 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e349656c-1d27-4785-9d19-ae7ee47808f9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.016439 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e349656c-1d27-4785-9d19-ae7ee47808f9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.463015 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.463686 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918" exitCode=0 Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.465090 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e349656c-1d27-4785-9d19-ae7ee47808f9","Type":"ContainerDied","Data":"83d974bb7874328c4b69c4ffa5a9c960c2e6d454ac650e2748a7c300f7c5ca72"} Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.465128 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83d974bb7874328c4b69c4ffa5a9c960c2e6d454ac650e2748a7c300f7c5ca72" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.465250 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.477869 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.478052 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.478224 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.916885 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.918207 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.918820 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.919181 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.919517 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:37 crc kubenswrapper[4746]: I0129 16:38:37.919807 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: E0129 16:38:38.014518 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.22:6443: connect: connection 
refused" event="&Event{ObjectMeta:{redhat-operators-94jzq.188f410fbb73f254 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-94jzq,UID:ddce9ca9-703b-4142-8256-2eb692e9965d,APIVersion:v1,ResourceVersion:28664,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:38:34.40514722 +0000 UTC m=+236.805731864,LastTimestamp:2026-01-29 16:38:34.40514722 +0000 UTC m=+236.805731864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.027013 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.027054 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.027070 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.027403 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.027440 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.027447 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.128585 4746 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.128629 4746 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.128642 4746 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.456162 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.456903 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.459114 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.460045 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.463465 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.476284 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.477080 4746 scope.go:117] "RemoveContainer" containerID="85a5a7de1b3870cf84a6d3f132242163d583610823f7f4cfe3a4157c197c2320" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.477177 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.478372 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.479035 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.479296 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.479476 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.483287 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.483845 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.485430 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.485958 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.495933 4746 scope.go:117] "RemoveContainer" containerID="f24389a0828f721378ce8bc7a061559fb49232bd541550864d63ff50b9b9456b" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.510298 4746 scope.go:117] "RemoveContainer" 
containerID="8a992784e639c8dc9e888cb6c5c2d66a89752ad2f51d51075a0fe419a4d77a82" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.521803 4746 scope.go:117] "RemoveContainer" containerID="439d177d32af501bd8fb0e55fe89cd5bd60d7d7b7ac06b6033857fde56728f41" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.535050 4746 scope.go:117] "RemoveContainer" containerID="533b7ec4c240fd3e4210316c173104e2e6e3444608bd3c602b5249387d213918" Jan 29 16:38:38 crc kubenswrapper[4746]: I0129 16:38:38.557386 4746 scope.go:117] "RemoveContainer" containerID="c35090261db3f243628b6756864affd9ed10a7bfeb587ce46cb7092ebf8b6051" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.227002 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.227403 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.227658 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.228590 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.228870 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:39 crc kubenswrapper[4746]: I0129 16:38:39.228905 4746 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.229304 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="200ms" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.430265 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="400ms" Jan 29 16:38:39 crc kubenswrapper[4746]: E0129 16:38:39.832224 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="800ms" Jan 29 16:38:40 crc kubenswrapper[4746]: E0129 16:38:40.635030 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.22:6443: connect: connection refused" interval="1.6s" Jan 29 16:38:42 crc kubenswrapper[4746]: E0129 16:38:42.238898 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="3.2s" Jan 29 16:38:45 crc kubenswrapper[4746]: E0129 16:38:45.440040 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.22:6443: connect: connection refused" interval="6.4s" Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.445648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.446611 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.447367 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.447869 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused" Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.464572 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd" Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.464607 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd" Jan 29 16:38:45 crc kubenswrapper[4746]: E0129 16:38:45.465057 4746 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.466079 4746 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.445648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.446611 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.447367 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.447869 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.464572 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.464607 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:45 crc kubenswrapper[4746]: E0129 16:38:45.465057 4746 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.466079 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:45 crc kubenswrapper[4746]: W0129 16:38:45.495311 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-4d796090056112a5d051bb9eb5d19d130f6e5e9c84eb95078e1301749159e3cb WatchSource:0}: Error finding container 4d796090056112a5d051bb9eb5d19d130f6e5e9c84eb95078e1301749159e3cb: Status 404 returned error can't find the container with id 4d796090056112a5d051bb9eb5d19d130f6e5e9c84eb95078e1301749159e3cb
Jan 29 16:38:45 crc kubenswrapper[4746]: I0129 16:38:45.514142 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4d796090056112a5d051bb9eb5d19d130f6e5e9c84eb95078e1301749159e3cb"}
Jan 29 16:38:46 crc kubenswrapper[4746]: I0129 16:38:46.520736 4746 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cac67dc2af20d7d4a90c9ce88c569c8ce4e2e474fc8f145c89cebeb2cd9d51ac" exitCode=0
Jan 29 16:38:46 crc kubenswrapper[4746]: I0129 16:38:46.520861 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cac67dc2af20d7d4a90c9ce88c569c8ce4e2e474fc8f145c89cebeb2cd9d51ac"}
Jan 29 16:38:46 crc kubenswrapper[4746]: I0129 16:38:46.521051 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:46 crc kubenswrapper[4746]: I0129 16:38:46.521123 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:46 crc kubenswrapper[4746]: E0129 16:38:46.521370 4746 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.22:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:46 crc kubenswrapper[4746]: I0129 16:38:46.521414 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 29 16:38:46 crc kubenswrapper[4746]: I0129 16:38:46.522322 4746 status_manager.go:851] "Failed to get status for pod" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 29 16:38:46 crc kubenswrapper[4746]: I0129 16:38:46.522621 4746 status_manager.go:851] "Failed to get status for pod" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" pod="openshift-marketplace/redhat-operators-94jzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-94jzq\": dial tcp 38.102.83.22:6443: connect: connection refused"
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.531345 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fb4c2d5bad8b28a83d6d21513df0259d86b4b17d84f3639d018c1b4256de3b70"}
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.531411 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"123baa1cbdab75e2f62b454ba1b6609acb3c9bdf34bbc41554b22830e03886c3"}
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.531435 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4a50b61be1143d2ec8226f8e2db4b568b3c504805df4a61b91abeebca4f8daaf"}
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.531449 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"055a540228f44bd47f51133ecbdbe2a5e252acf52e4107420d9ca932bfececca"}
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.535457 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.535503 4746 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a" exitCode=1
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.535531 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a"}
Jan 29 16:38:47 crc kubenswrapper[4746]: I0129 16:38:47.536049 4746 scope.go:117] "RemoveContainer" containerID="3a2ccd0995873a9b2c167e30b1840299dab783f96ab191ec770229304b63bd3a"
Jan 29 16:38:48 crc kubenswrapper[4746]: I0129 16:38:48.559809 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 16:38:48 crc kubenswrapper[4746]: I0129 16:38:48.560307 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ecdeed5dc1cbbfa3107e9f7d48a5ad222232ad3a05663ddfc7be10a1d9606f84"}
Jan 29 16:38:48 crc kubenswrapper[4746]: I0129 16:38:48.562954 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"426d9849f13f54999f3fd71b0d1927a7946a0d0e74a71617d6d3016ad41c049f"}
Jan 29 16:38:48 crc kubenswrapper[4746]: I0129 16:38:48.563275 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:48 crc kubenswrapper[4746]: I0129 16:38:48.563328 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:48 crc kubenswrapper[4746]: I0129 16:38:48.563423 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:49 crc kubenswrapper[4746]: I0129 16:38:49.782922 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:38:50 crc kubenswrapper[4746]: I0129 16:38:50.466806 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:50 crc kubenswrapper[4746]: I0129 16:38:50.466892 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:50 crc kubenswrapper[4746]: I0129 16:38:50.474995 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:53 crc kubenswrapper[4746]: I0129 16:38:53.495197 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:38:53 crc kubenswrapper[4746]: I0129 16:38:53.495512 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 16:38:53 crc kubenswrapper[4746]: I0129 16:38:53.495785 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 16:38:53 crc kubenswrapper[4746]: I0129 16:38:53.573250 4746 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:38:53 crc kubenswrapper[4746]: I0129 16:38:53.593641 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:53 crc kubenswrapper[4746]: I0129 16:38:53.593930 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0799c787-c274-4e25-a72c-0b56d6c03fdd"
Jan 29 16:38:53 crc kubenswrapper[4746]: I0129 16:38:53.597282 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
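[editor's note] The probe transitions above (startup "unhealthy" then "started", readiness "" then "ready") come from periodic HTTP checks against each container's healthz endpoint; while the process is still binding its port the check fails with exactly the "connect: connection refused" output logged by patch_prober.go. A sketch of one such HTTPS healthz check, using the URL from the log; skipping certificate verification the way kubelet HTTPS probes do, and the 1s timeout, are assumptions for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz performs one HTTPS GET against a healthz endpoint. A refused
// connection is reported as a failure, matching the probe output above.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			// kubelet-style HTTPS probes do not verify the serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 192.168.126.11:10257: connect: connection refused
	}
	defer resp.Body.Close()
	// Status codes in [200, 400) count as success.
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.126.11:10257/healthz"); err != nil {
		fmt.Println("Probe failed:", err) // startup stays "unhealthy" until this succeeds
	}
}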
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="cf58e0c4-8012-44ae-b40c-1f806a4a8578" Jan 29 16:39:01 crc kubenswrapper[4746]: I0129 16:39:01.589413 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 16:39:03 crc kubenswrapper[4746]: I0129 16:39:03.016022 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 16:39:03 crc kubenswrapper[4746]: I0129 16:39:03.431815 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:39:03 crc kubenswrapper[4746]: I0129 16:39:03.500441 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:39:03 crc kubenswrapper[4746]: I0129 16:39:03.509685 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:39:03 crc kubenswrapper[4746]: I0129 16:39:03.705900 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 16:39:03 crc kubenswrapper[4746]: I0129 16:39:03.955703 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 16:39:04 crc kubenswrapper[4746]: I0129 16:39:04.579669 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:39:04 crc kubenswrapper[4746]: I0129 16:39:04.730658 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 16:39:05 crc kubenswrapper[4746]: I0129 16:39:05.016470 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:39:05 crc kubenswrapper[4746]: I0129 16:39:05.155751 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 16:39:05 crc kubenswrapper[4746]: I0129 16:39:05.373919 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 16:39:05 crc kubenswrapper[4746]: I0129 16:39:05.437710 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 16:39:05 crc kubenswrapper[4746]: I0129 16:39:05.912241 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:39:05 crc kubenswrapper[4746]: I0129 16:39:05.954918 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:39:06 crc kubenswrapper[4746]: I0129 16:39:06.740433 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 16:39:06 crc kubenswrapper[4746]: I0129 16:39:06.793890 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 16:39:06 crc kubenswrapper[4746]: I0129 16:39:06.893384 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:39:07 crc kubenswrapper[4746]: I0129 16:39:07.094987 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:39:07 crc kubenswrapper[4746]: I0129 16:39:07.521339 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 16:39:07 crc kubenswrapper[4746]: I0129 16:39:07.540805 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 16:39:07 crc kubenswrapper[4746]: I0129 16:39:07.591134 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:39:07 crc kubenswrapper[4746]: I0129 16:39:07.640965 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.071915 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.134910 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.161629 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.172681 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.206579 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.223989 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.321280 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.343161 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.383309 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.402286 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.476580 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.556499 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.581267 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.600103 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.664808 4746 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.839810 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 16:39:08 crc kubenswrapper[4746]: I0129 16:39:08.942094 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.025520 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.048094 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.126566 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.136246 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.150670 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.157395 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.203013 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.213086 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.308811 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.467039 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.684371 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.717103 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.827326 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.911788 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.932577 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 16:39:09 crc kubenswrapper[4746]: I0129 16:39:09.996137 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.001082 4746 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.015973 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.032125 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.037103 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.051503 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.074911 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.225323 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.226384 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.238912 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.313599 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.355179 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.371705 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.496887 4746 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.651932 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.685332 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.727505 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.741020 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.784290 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.835081 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.880683 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Jan 29 16:39:10 crc kubenswrapper[4746]: I0129 16:39:10.974952 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.170134 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.191820 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.203668 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.212716 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.220470 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.247710 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.286738 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.368825 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.536741 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.544411 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.558370 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.564467 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.571845 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.671590 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 16:39:11 crc kubenswrapper[4746]: I0129 16:39:11.733359 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:11.779646 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:11.848367 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:11.869564 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:11.871148 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:11.887794 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:11.940858 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:11.994933 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.372298 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.401774 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.449032 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.520260 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.627332 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.652366 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.667706 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.686023 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.691786 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.694349 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.724060 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.740232 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.798676 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.832792 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 
16:39:12.880816 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.891867 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.900260 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.965952 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 16:39:12 crc kubenswrapper[4746]: I0129 16:39:12.992757 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.044626 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.130066 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.244504 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.261663 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.315544 4746 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.326221 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.373266 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.397966 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.410825 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.413378 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.539762 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.566839 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.672972 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.735728 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 16:39:13 crc kubenswrapper[4746]: 
I0129 16:39:13.806953 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.920423 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.947253 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:39:13 crc kubenswrapper[4746]: I0129 16:39:13.949070 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.040302 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.127114 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.142585 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.259740 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.302152 4746 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.408533 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.489585 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.521650 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.590653 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.874710 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.904576 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.964980 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.978568 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:39:14 crc kubenswrapper[4746]: I0129 16:39:14.985954 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.016761 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.160687 
4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.168343 4746 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.181865 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.286879 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.405122 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.407183 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.442651 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.447666 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.492708 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.502424 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.625126 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.633128 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.702533 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.707501 4746 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.708851 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.755645 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.827176 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.871714 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.897508 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.927777 4746 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.930133 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.978984 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.979394 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.993456 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 16:39:15 crc kubenswrapper[4746]: I0129 16:39:15.997849 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.045624 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.123910 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.126180 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.126384 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.283387 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.293412 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.339768 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.363152 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.366159 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.489059 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.499318 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.506089 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.539675 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.609735 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.649355 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.675062 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.686255 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.710133 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.766574 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.804591 4746 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.809812 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.809785594 podStartE2EDuration="42.809785594s" podCreationTimestamp="2026-01-29 16:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:38:53.430995607 +0000 UTC m=+255.831580251" watchObservedRunningTime="2026-01-29 16:39:16.809785594 +0000 UTC m=+279.210370248" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.812341 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-operators-94jzq"] Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.812408 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.819064 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.838229 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.838206148 podStartE2EDuration="23.838206148s" podCreationTimestamp="2026-01-29 16:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:39:16.838103725 +0000 UTC m=+279.238688379" watchObservedRunningTime="2026-01-29 16:39:16.838206148 +0000 UTC m=+279.238790792" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.926049 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.927749 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:39:16 crc kubenswrapper[4746]: I0129 16:39:16.976800 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.039563 4746 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.090695 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.091301 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.133286 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.183255 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.204904 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.221244 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.296295 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.330281 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.349579 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.408228 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.436316 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.552368 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.592372 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.599902 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.608998 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.612819 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.648277 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.716070 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.725467 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.742094 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.758738 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.759144 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.781039 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.789937 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.830614 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.938775 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:39:17 crc kubenswrapper[4746]: I0129 16:39:17.964561 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.006343 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.036291 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.059834 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.227576 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.228259 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.273762 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.454813 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" path="/var/lib/kubelet/pods/ddce9ca9-703b-4142-8256-2eb692e9965d/volumes" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.592501 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.754260 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.880694 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:39:18 crc kubenswrapper[4746]: I0129 16:39:18.983241 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 16:39:19 crc kubenswrapper[4746]: I0129 16:39:19.119480 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 16:39:19 crc kubenswrapper[4746]: I0129 16:39:19.149128 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:39:19 crc kubenswrapper[4746]: I0129 16:39:19.521737 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:39:19 crc kubenswrapper[4746]: I0129 16:39:19.564116 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:39:19 crc kubenswrapper[4746]: I0129 16:39:19.573889 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:39:19 crc kubenswrapper[4746]: I0129 16:39:19.668379 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 16:39:19 crc kubenswrapper[4746]: I0129 16:39:19.869494 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:39:20 crc kubenswrapper[4746]: I0129 16:39:20.054598 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:39:20 crc kubenswrapper[4746]: I0129 16:39:20.132456 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 16:39:20 crc kubenswrapper[4746]: I0129 16:39:20.336661 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 16:39:20 crc kubenswrapper[4746]: I0129 16:39:20.385424 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:39:20 crc kubenswrapper[4746]: I0129 16:39:20.406896 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:39:20 crc kubenswrapper[4746]: I0129 16:39:20.715367 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 16:39:20 crc kubenswrapper[4746]: I0129 16:39:20.989529 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:39:21 crc kubenswrapper[4746]: I0129 16:39:21.582399 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 16:39:21 crc kubenswrapper[4746]: I0129 16:39:21.787350 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:39:21 crc kubenswrapper[4746]: I0129 16:39:21.876325 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 29 16:39:27 crc kubenswrapper[4746]: I0129 16:39:27.383012 4746 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:39:27 crc kubenswrapper[4746]: I0129 16:39:27.384124 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e00d15b7141630748f17db544e855bf10e0e820d774afcd9837363b6d0d29184" gracePeriod=5 Jan 29 16:39:32 crc kubenswrapper[4746]: I0129 16:39:32.836479 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:39:32 crc kubenswrapper[4746]: I0129 16:39:32.837394 4746 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e00d15b7141630748f17db544e855bf10e0e820d774afcd9837363b6d0d29184" exitCode=137 Jan 29 16:39:32 crc kubenswrapper[4746]: I0129 16:39:32.964880 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:39:32 crc kubenswrapper[4746]: I0129 16:39:32.964995 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.081103 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.081696 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.081294 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.081742 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.082217 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.082605 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.082822 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.082690 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.082987 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.083754 4746 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.083935 4746 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.084127 4746 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.084318 4746 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.094044 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.185347 4746 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.845453 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.845542 4746 scope.go:117] "RemoveContainer" containerID="e00d15b7141630748f17db544e855bf10e0e820d774afcd9837363b6d0d29184" Jan 29 16:39:33 crc kubenswrapper[4746]: I0129 16:39:33.845639 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:39:34 crc kubenswrapper[4746]: I0129 16:39:34.457318 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 16:39:34 crc kubenswrapper[4746]: I0129 16:39:34.457565 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 29 16:39:34 crc kubenswrapper[4746]: I0129 16:39:34.468895 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:39:34 crc kubenswrapper[4746]: I0129 16:39:34.468964 4746 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="62d9f2d6-1431-4961-890b-0704801f5055" Jan 29 16:39:34 crc kubenswrapper[4746]: I0129 16:39:34.472586 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:39:34 crc kubenswrapper[4746]: I0129 16:39:34.472633 4746 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="62d9f2d6-1431-4961-890b-0704801f5055" Jan 29 16:39:38 crc kubenswrapper[4746]: I0129 16:39:38.201787 4746 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 16:39:38 crc kubenswrapper[4746]: I0129 16:39:38.873347 4746 generic.go:334] "Generic (PLEG): container finished" podID="608c383e-45e1-43dd-b8ad-9a7499953754" containerID="a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60" exitCode=0 Jan 29 16:39:38 crc kubenswrapper[4746]: I0129 16:39:38.873398 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" event={"ID":"608c383e-45e1-43dd-b8ad-9a7499953754","Type":"ContainerDied","Data":"a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60"} Jan 29 16:39:38 crc kubenswrapper[4746]: I0129 16:39:38.873890 4746 scope.go:117] "RemoveContainer" containerID="a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60" Jan 29 16:39:39 crc kubenswrapper[4746]: I0129 16:39:39.880782 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" event={"ID":"608c383e-45e1-43dd-b8ad-9a7499953754","Type":"ContainerStarted","Data":"59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355"} 
Jan 29 16:39:39 crc kubenswrapper[4746]: I0129 16:39:39.881566 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:39:39 crc kubenswrapper[4746]: I0129 16:39:39.884555 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.216392 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8q4kh"] Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.217526 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" podUID="55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" containerName="controller-manager" containerID="cri-o://131ae3ffb2176aa8c2e578ebde65292b45151fbd358beb3156fe943986eae9e2" gracePeriod=30 Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.319536 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"] Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.319781 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" podUID="b08d2e95-cdc5-4934-94ec-2cdb56479e29" containerName="route-controller-manager" containerID="cri-o://cd0fb777f8c73716e27dd5e588e1043c3da2b06bb507efc76b0ff490689b828b" gracePeriod=30 Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.926380 4746 generic.go:334] "Generic (PLEG): container finished" podID="55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" containerID="131ae3ffb2176aa8c2e578ebde65292b45151fbd358beb3156fe943986eae9e2" exitCode=0 Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.926455 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" event={"ID":"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb","Type":"ContainerDied","Data":"131ae3ffb2176aa8c2e578ebde65292b45151fbd358beb3156fe943986eae9e2"} Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.928644 4746 generic.go:334] "Generic (PLEG): container finished" podID="b08d2e95-cdc5-4934-94ec-2cdb56479e29" containerID="cd0fb777f8c73716e27dd5e588e1043c3da2b06bb507efc76b0ff490689b828b" exitCode=0 Jan 29 16:39:47 crc kubenswrapper[4746]: I0129 16:39:47.928682 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" event={"ID":"b08d2e95-cdc5-4934-94ec-2cdb56479e29","Type":"ContainerDied","Data":"cd0fb777f8c73716e27dd5e588e1043c3da2b06bb507efc76b0ff490689b828b"} Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.100579 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.168223 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.186966 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-config\") pod \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.187044 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmhmq\" (UniqueName: \"kubernetes.io/projected/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-kube-api-access-bmhmq\") pod \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.187105 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-client-ca\") pod \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.187133 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-serving-cert\") pod \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.187169 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-proxy-ca-bundles\") pod \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\" (UID: \"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.188202 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" (UID: "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.188694 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-config" (OuterVolumeSpecName: "config") pod "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" (UID: "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.189866 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-client-ca" (OuterVolumeSpecName: "client-ca") pod "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" (UID: "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.195657 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" (UID: "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.196231 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-kube-api-access-bmhmq" (OuterVolumeSpecName: "kube-api-access-bmhmq") pod "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" (UID: "55b1c15f-46c7-4712-8ed1-2e7d9a77eadb"). InnerVolumeSpecName "kube-api-access-bmhmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288507 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-config\") pod \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288565 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08d2e95-cdc5-4934-94ec-2cdb56479e29-serving-cert\") pod \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288590 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-client-ca\") pod \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288639 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fslng\" (UniqueName: \"kubernetes.io/projected/b08d2e95-cdc5-4934-94ec-2cdb56479e29-kube-api-access-fslng\") pod \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\" (UID: \"b08d2e95-cdc5-4934-94ec-2cdb56479e29\") " Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288897 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288909 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmhmq\" (UniqueName: \"kubernetes.io/projected/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-kube-api-access-bmhmq\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288918 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288927 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.288935 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.289576 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-client-ca" (OuterVolumeSpecName: "client-ca") pod "b08d2e95-cdc5-4934-94ec-2cdb56479e29" (UID: 
"b08d2e95-cdc5-4934-94ec-2cdb56479e29"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.289676 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-config" (OuterVolumeSpecName: "config") pod "b08d2e95-cdc5-4934-94ec-2cdb56479e29" (UID: "b08d2e95-cdc5-4934-94ec-2cdb56479e29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.292378 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b08d2e95-cdc5-4934-94ec-2cdb56479e29-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b08d2e95-cdc5-4934-94ec-2cdb56479e29" (UID: "b08d2e95-cdc5-4934-94ec-2cdb56479e29"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.292781 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08d2e95-cdc5-4934-94ec-2cdb56479e29-kube-api-access-fslng" (OuterVolumeSpecName: "kube-api-access-fslng") pod "b08d2e95-cdc5-4934-94ec-2cdb56479e29" (UID: "b08d2e95-cdc5-4934-94ec-2cdb56479e29"). InnerVolumeSpecName "kube-api-access-fslng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.390333 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.390375 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b08d2e95-cdc5-4934-94ec-2cdb56479e29-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.390384 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b08d2e95-cdc5-4934-94ec-2cdb56479e29-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.390393 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fslng\" (UniqueName: \"kubernetes.io/projected/b08d2e95-cdc5-4934-94ec-2cdb56479e29-kube-api-access-fslng\") on node \"crc\" DevicePath \"\"" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935075 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5967c84899-5c587"] Jan 29 16:39:48 crc kubenswrapper[4746]: E0129 16:39:48.935438 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="extract-utilities" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935455 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="extract-utilities" Jan 29 16:39:48 crc kubenswrapper[4746]: E0129 16:39:48.935469 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935605 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:39:48 crc kubenswrapper[4746]: E0129 16:39:48.935620 4746 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08d2e95-cdc5-4934-94ec-2cdb56479e29" containerName="route-controller-manager" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935629 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08d2e95-cdc5-4934-94ec-2cdb56479e29" containerName="route-controller-manager" Jan 29 16:39:48 crc kubenswrapper[4746]: E0129 16:39:48.935645 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="registry-server" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935653 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="registry-server" Jan 29 16:39:48 crc kubenswrapper[4746]: E0129 16:39:48.935663 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="extract-content" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935671 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="extract-content" Jan 29 16:39:48 crc kubenswrapper[4746]: E0129 16:39:48.935679 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" containerName="controller-manager" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935690 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" containerName="controller-manager" Jan 29 16:39:48 crc kubenswrapper[4746]: E0129 16:39:48.935700 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" containerName="installer" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935708 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" containerName="installer" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935841 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddce9ca9-703b-4142-8256-2eb692e9965d" containerName="registry-server" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935856 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e349656c-1d27-4785-9d19-ae7ee47808f9" containerName="installer" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935868 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" containerName="controller-manager" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935879 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08d2e95-cdc5-4934-94ec-2cdb56479e29" containerName="route-controller-manager" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.935890 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.936414 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.937588 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.937640 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6" event={"ID":"b08d2e95-cdc5-4934-94ec-2cdb56479e29","Type":"ContainerDied","Data":"2ca45fd2b09bceda4660c848c7e99b26786a1f76df1f65332c375df175c23a6f"} Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.937745 4746 scope.go:117] "RemoveContainer" containerID="cd0fb777f8c73716e27dd5e588e1043c3da2b06bb507efc76b0ff490689b828b" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.939257 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" event={"ID":"55b1c15f-46c7-4712-8ed1-2e7d9a77eadb","Type":"ContainerDied","Data":"b34ad19bd8dc265013a739a142f9f69fa312f6ff6ff469db1bacc5b7a1976bcf"} Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.939289 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8q4kh" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.942840 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5"] Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.943596 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.950534 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.950753 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.950888 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.951156 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.951509 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.951583 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.957707 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5"] Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.962530 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5967c84899-5c587"] Jan 29 16:39:48 crc kubenswrapper[4746]: I0129 16:39:48.963815 4746 scope.go:117] "RemoveContainer" containerID="131ae3ffb2176aa8c2e578ebde65292b45151fbd358beb3156fe943986eae9e2" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.008238 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"] Jan 29 16:39:49 crc 
kubenswrapper[4746]: I0129 16:39:49.030958 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mwpz6"] Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.043033 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8q4kh"] Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.047984 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8q4kh"] Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100631 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb74b681-d755-4425-949e-ab8e40c9a10b-serving-cert\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100689 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frxs9\" (UniqueName: \"kubernetes.io/projected/f541cf51-6a4c-4ac0-970b-f93b7400a711-kube-api-access-frxs9\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100719 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g7ns\" (UniqueName: \"kubernetes.io/projected/eb74b681-d755-4425-949e-ab8e40c9a10b-kube-api-access-4g7ns\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100818 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-config\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100849 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-client-ca\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100866 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f541cf51-6a4c-4ac0-970b-f93b7400a711-serving-cert\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100887 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-proxy-ca-bundles\") pod 
\"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100911 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-client-ca\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.100937 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-config\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.201827 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb74b681-d755-4425-949e-ab8e40c9a10b-serving-cert\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202008 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frxs9\" (UniqueName: \"kubernetes.io/projected/f541cf51-6a4c-4ac0-970b-f93b7400a711-kube-api-access-frxs9\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202057 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g7ns\" (UniqueName: \"kubernetes.io/projected/eb74b681-d755-4425-949e-ab8e40c9a10b-kube-api-access-4g7ns\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202101 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-config\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202141 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-client-ca\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202165 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f541cf51-6a4c-4ac0-970b-f93b7400a711-serving-cert\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " 
pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202210 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-proxy-ca-bundles\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202242 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-config\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.202264 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-client-ca\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.203428 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-proxy-ca-bundles\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.203471 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-client-ca\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.203855 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-client-ca\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.204300 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-config\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.205938 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-config\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.207488 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eb74b681-d755-4425-949e-ab8e40c9a10b-serving-cert\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.209639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f541cf51-6a4c-4ac0-970b-f93b7400a711-serving-cert\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.227398 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frxs9\" (UniqueName: \"kubernetes.io/projected/f541cf51-6a4c-4ac0-970b-f93b7400a711-kube-api-access-frxs9\") pod \"route-controller-manager-bcfc6b96c-fcjr5\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.229250 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g7ns\" (UniqueName: \"kubernetes.io/projected/eb74b681-d755-4425-949e-ab8e40c9a10b-kube-api-access-4g7ns\") pod \"controller-manager-5967c84899-5c587\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.262110 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.287221 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.544610 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5967c84899-5c587"] Jan 29 16:39:49 crc kubenswrapper[4746]: W0129 16:39:49.552272 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb74b681_d755_4425_949e_ab8e40c9a10b.slice/crio-9a29a6f2bb2c281091f6aabb97d0a69e704de316bc96e1efffc6e695eeb7997c WatchSource:0}: Error finding container 9a29a6f2bb2c281091f6aabb97d0a69e704de316bc96e1efffc6e695eeb7997c: Status 404 returned error can't find the container with id 9a29a6f2bb2c281091f6aabb97d0a69e704de316bc96e1efffc6e695eeb7997c Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.589144 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5"] Jan 29 16:39:49 crc kubenswrapper[4746]: W0129 16:39:49.596348 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf541cf51_6a4c_4ac0_970b_f93b7400a711.slice/crio-1f871133bcb45d17872159b0f886d23320b77197c1742d610f1cf38ae18c935a WatchSource:0}: Error finding container 1f871133bcb45d17872159b0f886d23320b77197c1742d610f1cf38ae18c935a: Status 404 returned error can't find the container with id 1f871133bcb45d17872159b0f886d23320b77197c1742d610f1cf38ae18c935a Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.857438 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9678f"] Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.945971 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" event={"ID":"f541cf51-6a4c-4ac0-970b-f93b7400a711","Type":"ContainerStarted","Data":"d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755"} Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.946025 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" event={"ID":"f541cf51-6a4c-4ac0-970b-f93b7400a711","Type":"ContainerStarted","Data":"1f871133bcb45d17872159b0f886d23320b77197c1742d610f1cf38ae18c935a"} Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.946351 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.949520 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" event={"ID":"eb74b681-d755-4425-949e-ab8e40c9a10b","Type":"ContainerStarted","Data":"4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2"} Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.949575 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" event={"ID":"eb74b681-d755-4425-949e-ab8e40c9a10b","Type":"ContainerStarted","Data":"9a29a6f2bb2c281091f6aabb97d0a69e704de316bc96e1efffc6e695eeb7997c"} Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.949739 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.956150 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.966930 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" podStartSLOduration=2.966900232 podStartE2EDuration="2.966900232s" podCreationTimestamp="2026-01-29 16:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:39:49.964444423 +0000 UTC m=+312.365029077" watchObservedRunningTime="2026-01-29 16:39:49.966900232 +0000 UTC m=+312.367484876" Jan 29 16:39:49 crc kubenswrapper[4746]: I0129 16:39:49.984543 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" podStartSLOduration=2.984520874 podStartE2EDuration="2.984520874s" podCreationTimestamp="2026-01-29 16:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:39:49.984291458 +0000 UTC m=+312.384876102" watchObservedRunningTime="2026-01-29 16:39:49.984520874 +0000 UTC m=+312.385105518" Jan 29 16:39:50 crc kubenswrapper[4746]: I0129 16:39:50.233610 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:39:50 crc kubenswrapper[4746]: I0129 16:39:50.454617 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b1c15f-46c7-4712-8ed1-2e7d9a77eadb" path="/var/lib/kubelet/pods/55b1c15f-46c7-4712-8ed1-2e7d9a77eadb/volumes" Jan 29 16:39:50 crc kubenswrapper[4746]: I0129 16:39:50.455330 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08d2e95-cdc5-4934-94ec-2cdb56479e29" path="/var/lib/kubelet/pods/b08d2e95-cdc5-4934-94ec-2cdb56479e29/volumes" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.173753 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5"] Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.174905 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" podUID="f541cf51-6a4c-4ac0-970b-f93b7400a711" containerName="route-controller-manager" containerID="cri-o://d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755" gracePeriod=30 Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.624467 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.765753 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frxs9\" (UniqueName: \"kubernetes.io/projected/f541cf51-6a4c-4ac0-970b-f93b7400a711-kube-api-access-frxs9\") pod \"f541cf51-6a4c-4ac0-970b-f93b7400a711\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.765892 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-config\") pod \"f541cf51-6a4c-4ac0-970b-f93b7400a711\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.765928 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-client-ca\") pod \"f541cf51-6a4c-4ac0-970b-f93b7400a711\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.766825 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-client-ca" (OuterVolumeSpecName: "client-ca") pod "f541cf51-6a4c-4ac0-970b-f93b7400a711" (UID: "f541cf51-6a4c-4ac0-970b-f93b7400a711"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.766948 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-config" (OuterVolumeSpecName: "config") pod "f541cf51-6a4c-4ac0-970b-f93b7400a711" (UID: "f541cf51-6a4c-4ac0-970b-f93b7400a711"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.767165 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f541cf51-6a4c-4ac0-970b-f93b7400a711-serving-cert\") pod \"f541cf51-6a4c-4ac0-970b-f93b7400a711\" (UID: \"f541cf51-6a4c-4ac0-970b-f93b7400a711\") " Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.767473 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.767500 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f541cf51-6a4c-4ac0-970b-f93b7400a711-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.772082 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f541cf51-6a4c-4ac0-970b-f93b7400a711-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f541cf51-6a4c-4ac0-970b-f93b7400a711" (UID: "f541cf51-6a4c-4ac0-970b-f93b7400a711"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.772342 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f541cf51-6a4c-4ac0-970b-f93b7400a711-kube-api-access-frxs9" (OuterVolumeSpecName: "kube-api-access-frxs9") pod "f541cf51-6a4c-4ac0-970b-f93b7400a711" (UID: "f541cf51-6a4c-4ac0-970b-f93b7400a711"). InnerVolumeSpecName "kube-api-access-frxs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.868586 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f541cf51-6a4c-4ac0-970b-f93b7400a711-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:07 crc kubenswrapper[4746]: I0129 16:40:07.868640 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frxs9\" (UniqueName: \"kubernetes.io/projected/f541cf51-6a4c-4ac0-970b-f93b7400a711-kube-api-access-frxs9\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.072032 4746 generic.go:334] "Generic (PLEG): container finished" podID="f541cf51-6a4c-4ac0-970b-f93b7400a711" containerID="d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755" exitCode=0 Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.072100 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" event={"ID":"f541cf51-6a4c-4ac0-970b-f93b7400a711","Type":"ContainerDied","Data":"d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755"} Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.072122 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.072158 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5" event={"ID":"f541cf51-6a4c-4ac0-970b-f93b7400a711","Type":"ContainerDied","Data":"1f871133bcb45d17872159b0f886d23320b77197c1742d610f1cf38ae18c935a"} Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.072220 4746 scope.go:117] "RemoveContainer" containerID="d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.092322 4746 scope.go:117] "RemoveContainer" containerID="d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755" Jan 29 16:40:08 crc kubenswrapper[4746]: E0129 16:40:08.092809 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755\": container with ID starting with d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755 not found: ID does not exist" containerID="d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.092877 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755"} err="failed to get container status \"d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755\": rpc error: code = NotFound desc = could not find container \"d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755\": container with ID starting with d7b6b1236213dec5c0fac81b357e7cfe960491d065f73be82c353d422e8ad755 not found: ID does not exist" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.111134 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5"] Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.115496 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bcfc6b96c-fcjr5"] Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.458699 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f541cf51-6a4c-4ac0-970b-f93b7400a711" path="/var/lib/kubelet/pods/f541cf51-6a4c-4ac0-970b-f93b7400a711/volumes" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.950734 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"] Jan 29 16:40:08 crc kubenswrapper[4746]: E0129 16:40:08.951028 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f541cf51-6a4c-4ac0-970b-f93b7400a711" containerName="route-controller-manager" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.951043 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f541cf51-6a4c-4ac0-970b-f93b7400a711" containerName="route-controller-manager" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.951232 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f541cf51-6a4c-4ac0-970b-f93b7400a711" containerName="route-controller-manager" Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.951701 4746 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.953584 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.954293 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.954492 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.954745 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.956874 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.958253 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:40:08 crc kubenswrapper[4746]: I0129 16:40:08.963934 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"]
Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.084662 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-config\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"
Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.084726 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-serving-cert\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"
Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.084773 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-client-ca\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"
Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.084808 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n8g\" (UniqueName: \"kubernetes.io/projected/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-kube-api-access-p9n8g\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"
Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.186867 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-config\") pod 
\"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.186943 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-serving-cert\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.186997 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-client-ca\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.187033 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9n8g\" (UniqueName: \"kubernetes.io/projected/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-kube-api-access-p9n8g\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.188959 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-client-ca\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.189228 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-config\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.193398 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-serving-cert\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.203416 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9n8g\" (UniqueName: \"kubernetes.io/projected/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-kube-api-access-p9n8g\") pod \"route-controller-manager-549bf89c9f-vtc8g\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.274006 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:09 crc kubenswrapper[4746]: I0129 16:40:09.704363 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"] Jan 29 16:40:10 crc kubenswrapper[4746]: I0129 16:40:10.087314 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" event={"ID":"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c","Type":"ContainerStarted","Data":"291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607"} Jan 29 16:40:10 crc kubenswrapper[4746]: I0129 16:40:10.087854 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" event={"ID":"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c","Type":"ContainerStarted","Data":"b3e3fba672a52b5921dba987914f9192fb2fbde964f3247b00ad26c3ecb7741e"} Jan 29 16:40:10 crc kubenswrapper[4746]: I0129 16:40:10.087878 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:10 crc kubenswrapper[4746]: I0129 16:40:10.110545 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" podStartSLOduration=3.110522371 podStartE2EDuration="3.110522371s" podCreationTimestamp="2026-01-29 16:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:10.108066632 +0000 UTC m=+332.508651286" watchObservedRunningTime="2026-01-29 16:40:10.110522371 +0000 UTC m=+332.511107015" Jan 29 16:40:10 crc kubenswrapper[4746]: I0129 16:40:10.323241 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" Jan 29 16:40:14 crc kubenswrapper[4746]: I0129 16:40:14.885571 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" podUID="9e56505d-05bd-4223-a84d-4622ce4267ee" containerName="oauth-openshift" containerID="cri-o://6ade74beead1304efcba1cc838d93b7545569ffb73d031ce56143ed61c7b7079" gracePeriod=15 Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.124590 4746 generic.go:334] "Generic (PLEG): container finished" podID="9e56505d-05bd-4223-a84d-4622ce4267ee" containerID="6ade74beead1304efcba1cc838d93b7545569ffb73d031ce56143ed61c7b7079" exitCode=0 Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.125017 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" event={"ID":"9e56505d-05bd-4223-a84d-4622ce4267ee","Type":"ContainerDied","Data":"6ade74beead1304efcba1cc838d93b7545569ffb73d031ce56143ed61c7b7079"} Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.411417 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.444534 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6cf47d78cb-kxsck"] Jan 29 16:40:15 crc kubenswrapper[4746]: E0129 16:40:15.444803 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e56505d-05bd-4223-a84d-4622ce4267ee" containerName="oauth-openshift" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.444819 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e56505d-05bd-4223-a84d-4622ce4267ee" containerName="oauth-openshift" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.444950 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e56505d-05bd-4223-a84d-4622ce4267ee" containerName="oauth-openshift" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.445488 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.479131 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6cf47d78cb-kxsck"] Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.581724 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-serving-cert\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.581812 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-trusted-ca-bundle\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.581847 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-router-certs\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.581889 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-provider-selection\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.581926 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-login\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.581953 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-ocp-branding-template\") pod 
\"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583129 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-error\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583165 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-session\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583203 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-cliconfig\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583232 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-policies\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583256 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-service-ca\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583294 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-idp-0-file-data\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583314 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-dir\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583337 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8jt8\" (UniqueName: \"kubernetes.io/projected/9e56505d-05bd-4223-a84d-4622ce4267ee-kube-api-access-n8jt8\") pod \"9e56505d-05bd-4223-a84d-4622ce4267ee\" (UID: \"9e56505d-05bd-4223-a84d-4622ce4267ee\") " Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583463 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc 
kubenswrapper[4746]: I0129 16:40:15.583490 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583507 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583539 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583562 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-audit-policies\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583593 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-session\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583620 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583670 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583699 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-error\") pod 
\"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583753 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2pmc\" (UniqueName: \"kubernetes.io/projected/e29b0828-885a-48b7-9afb-506fc51bf933-kube-api-access-z2pmc\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583798 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.583820 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-login\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584245 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584304 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584358 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584408 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584418 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584450 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584467 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e29b0828-885a-48b7-9afb-506fc51bf933-audit-dir\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584680 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584694 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.584705 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.590000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.593781 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e56505d-05bd-4223-a84d-4622ce4267ee-kube-api-access-n8jt8" (OuterVolumeSpecName: "kube-api-access-n8jt8") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "kube-api-access-n8jt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.598402 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.598748 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.599069 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.599354 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.599490 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.599761 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.601374 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9e56505d-05bd-4223-a84d-4622ce4267ee" (UID: "9e56505d-05bd-4223-a84d-4622ce4267ee"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686629 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2pmc\" (UniqueName: \"kubernetes.io/projected/e29b0828-885a-48b7-9afb-506fc51bf933-kube-api-access-z2pmc\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686704 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686733 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-login\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686758 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686781 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e29b0828-885a-48b7-9afb-506fc51bf933-audit-dir\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686808 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686824 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: 
\"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686843 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686870 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686912 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-audit-policies\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686937 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-session\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686955 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686982 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.686998 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-error\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687048 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 
16:40:15.687060 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687070 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687079 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687089 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687098 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687109 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687121 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687133 4746 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9e56505d-05bd-4223-a84d-4622ce4267ee-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687145 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8jt8\" (UniqueName: \"kubernetes.io/projected/9e56505d-05bd-4223-a84d-4622ce4267ee-kube-api-access-n8jt8\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.687157 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e56505d-05bd-4223-a84d-4622ce4267ee-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.689123 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e29b0828-885a-48b7-9afb-506fc51bf933-audit-dir\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.689410 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-audit-policies\") pod 
\"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.689408 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.690556 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.691044 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.692618 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.692856 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.694480 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.695550 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.695577 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.696599 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-system-session\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.696382 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-error\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.699137 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e29b0828-885a-48b7-9afb-506fc51bf933-v4-0-config-user-template-login\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.714995 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2pmc\" (UniqueName: \"kubernetes.io/projected/e29b0828-885a-48b7-9afb-506fc51bf933-kube-api-access-z2pmc\") pod \"oauth-openshift-6cf47d78cb-kxsck\" (UID: \"e29b0828-885a-48b7-9afb-506fc51bf933\") " pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:15 crc kubenswrapper[4746]: I0129 16:40:15.764652 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" Jan 29 16:40:16 crc kubenswrapper[4746]: I0129 16:40:16.134462 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9678f" event={"ID":"9e56505d-05bd-4223-a84d-4622ce4267ee","Type":"ContainerDied","Data":"e8dd20204ce388abaeee33041e4f2d1a03e7b9b6b100e74aafd9227a253d2dd6"} Jan 29 16:40:16 crc kubenswrapper[4746]: I0129 16:40:16.134576 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9678f"
Jan 29 16:40:16 crc kubenswrapper[4746]: I0129 16:40:16.135002 4746 scope.go:117] "RemoveContainer" containerID="6ade74beead1304efcba1cc838d93b7545569ffb73d031ce56143ed61c7b7079"
Jan 29 16:40:16 crc kubenswrapper[4746]: I0129 16:40:16.183239 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9678f"]
Jan 29 16:40:16 crc kubenswrapper[4746]: I0129 16:40:16.188766 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9678f"]
Jan 29 16:40:16 crc kubenswrapper[4746]: I0129 16:40:16.215479 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6cf47d78cb-kxsck"]
Jan 29 16:40:16 crc kubenswrapper[4746]: I0129 16:40:16.454156 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e56505d-05bd-4223-a84d-4622ce4267ee" path="/var/lib/kubelet/pods/9e56505d-05bd-4223-a84d-4622ce4267ee/volumes"
Jan 29 16:40:17 crc kubenswrapper[4746]: I0129 16:40:17.146591 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" event={"ID":"e29b0828-885a-48b7-9afb-506fc51bf933","Type":"ContainerStarted","Data":"5b6be5fac41f62278862014de41d7289c73e41e10f359165113f54c23f79b77c"}
Jan 29 16:40:17 crc kubenswrapper[4746]: I0129 16:40:17.147090 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck"
Jan 29 16:40:17 crc kubenswrapper[4746]: I0129 16:40:17.147113 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" event={"ID":"e29b0828-885a-48b7-9afb-506fc51bf933","Type":"ContainerStarted","Data":"1a0358c389b184c85f6c2c4728ea8844ef9a9bbd71f1b4caa2251cbf475bd928"}
Jan 29 16:40:17 crc kubenswrapper[4746]: I0129 16:40:17.160721 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck"
Jan 29 16:40:17 crc kubenswrapper[4746]: I0129 16:40:17.220572 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6cf47d78cb-kxsck" podStartSLOduration=28.220545973 podStartE2EDuration="28.220545973s" podCreationTimestamp="2026-01-29 16:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:17.179968771 +0000 UTC m=+339.580553415" watchObservedRunningTime="2026-01-29 16:40:17.220545973 +0000 UTC m=+339.621130617"
Jan 29 16:40:27 crc kubenswrapper[4746]: I0129 16:40:27.673414 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5967c84899-5c587"]
Jan 29 16:40:27 crc kubenswrapper[4746]: I0129 16:40:27.674233 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" podUID="eb74b681-d755-4425-949e-ab8e40c9a10b" containerName="controller-manager" containerID="cri-o://4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2" gracePeriod=30
Jan 29 16:40:27 crc kubenswrapper[4746]: I0129 16:40:27.777801 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"]
Jan 29 16:40:27 crc kubenswrapper[4746]: I0129 16:40:27.778059 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" podUID="2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" containerName="route-controller-manager" containerID="cri-o://291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607" gracePeriod=30
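The pod_startup_latency_tracker.go:104 record above carries its own arithmetic: with both image-pull timestamps zeroed (no pull happened), the reported podStartSLOduration of 28.220545973s equals watchObservedRunningTime minus podCreationTimestamp (16:40:17.220545973 − 16:39:49), and the same relation holds for the earlier 3.110522371s record. A small Go sketch reproducing that subtraction from the logged timestamps:

    // Sketch: recompute the podStartSLOduration printed above from the
    // timestamps in the same log record (no-pull case).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-29 16:39:49 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-01-29 16:40:17.220545973 +0000 UTC")
        fmt.Println(observed.Sub(created).Seconds()) // 28.220545973
    }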
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.163275 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5967c84899-5c587"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.171041 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.222767 4746 generic.go:334] "Generic (PLEG): container finished" podID="eb74b681-d755-4425-949e-ab8e40c9a10b" containerID="4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2" exitCode=0
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.222829 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" event={"ID":"eb74b681-d755-4425-949e-ab8e40c9a10b","Type":"ContainerDied","Data":"4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2"}
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.222859 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5967c84899-5c587" event={"ID":"eb74b681-d755-4425-949e-ab8e40c9a10b","Type":"ContainerDied","Data":"9a29a6f2bb2c281091f6aabb97d0a69e704de316bc96e1efffc6e695eeb7997c"}
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.222875 4746 scope.go:117] "RemoveContainer" containerID="4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.222967 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5967c84899-5c587"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.225852 4746 generic.go:334] "Generic (PLEG): container finished" podID="2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" containerID="291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607" exitCode=0
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.225911 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" event={"ID":"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c","Type":"ContainerDied","Data":"291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607"}
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.225947 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g" event={"ID":"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c","Type":"ContainerDied","Data":"b3e3fba672a52b5921dba987914f9192fb2fbde964f3247b00ad26c3ecb7741e"}
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.226025 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"
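The kuberuntime_container.go:808 entries just before this point show the stop contract behind "Killing container with a grace period ... gracePeriod=30": the runtime signals the container to terminate and only force-kills it if it is still alive when the grace window expires; the exitCode=0 in the PLEG records above means both controller managers shut down cleanly inside that window. The following Go sketch illustrates the general SIGTERM-then-SIGKILL pattern with a local process; it is an assumption about the shape of the mechanism, not CRI-O internals, and is Unix-only.

    // Sketch: stop a process politely, escalating only after the grace window.
    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM) // polite shutdown request

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        select {
        case <-done:
            fmt.Println("exited within grace period") // the exitCode=0 case above
        case <-time.After(grace):
            _ = cmd.Process.Kill() // SIGKILL once the window expires
            <-done
            fmt.Println("killed after grace period")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "1")
        _ = cmd.Start()
        stopWithGrace(cmd, 30*time.Second)
    }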
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.248104 4746 scope.go:117] "RemoveContainer" containerID="4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2"
Jan 29 16:40:28 crc kubenswrapper[4746]: E0129 16:40:28.249605 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2\": container with ID starting with 4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2 not found: ID does not exist" containerID="4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.249656 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2"} err="failed to get container status \"4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2\": rpc error: code = NotFound desc = could not find container \"4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2\": container with ID starting with 4556cff32d51929b66368156c5d6c36a8c84c916cd3431e898efae4df7c867e2 not found: ID does not exist"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.249688 4746 scope.go:117] "RemoveContainer" containerID="291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.264210 4746 scope.go:117] "RemoveContainer" containerID="291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607"
Jan 29 16:40:28 crc kubenswrapper[4746]: E0129 16:40:28.264600 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607\": container with ID starting with 291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607 not found: ID does not exist" containerID="291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.264640 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607"} err="failed to get container status \"291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607\": rpc error: code = NotFound desc = could not find container \"291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607\": container with ID starting with 291e5a17975576c196601f5afba08fc96e8bc629aead8fd5d3f0ee1d3cd6c607 not found: ID does not exist"
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.288771 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-serving-cert\") pod \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") "
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.288876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-client-ca\") pod \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") "
Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.288922 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb74b681-d755-4425-949e-ab8e40c9a10b-serving-cert\") pod \"eb74b681-d755-4425-949e-ab8e40c9a10b\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.288961 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-proxy-ca-bundles\") pod \"eb74b681-d755-4425-949e-ab8e40c9a10b\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.288992 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-config\") pod \"eb74b681-d755-4425-949e-ab8e40c9a10b\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.289012 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-config\") pod \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.289032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g7ns\" (UniqueName: \"kubernetes.io/projected/eb74b681-d755-4425-949e-ab8e40c9a10b-kube-api-access-4g7ns\") pod \"eb74b681-d755-4425-949e-ab8e40c9a10b\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.289051 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-client-ca\") pod \"eb74b681-d755-4425-949e-ab8e40c9a10b\" (UID: \"eb74b681-d755-4425-949e-ab8e40c9a10b\") " Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.289119 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9n8g\" (UniqueName: \"kubernetes.io/projected/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-kube-api-access-p9n8g\") pod \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\" (UID: \"2b724b6d-36e2-4a4f-9a31-0b388acc3f1c\") " Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.290300 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" (UID: "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.290368 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-config" (OuterVolumeSpecName: "config") pod "eb74b681-d755-4425-949e-ab8e40c9a10b" (UID: "eb74b681-d755-4425-949e-ab8e40c9a10b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.290390 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eb74b681-d755-4425-949e-ab8e40c9a10b" (UID: "eb74b681-d755-4425-949e-ab8e40c9a10b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.290823 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-config" (OuterVolumeSpecName: "config") pod "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" (UID: "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.290948 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb74b681-d755-4425-949e-ab8e40c9a10b" (UID: "eb74b681-d755-4425-949e-ab8e40c9a10b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.295988 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" (UID: "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.296367 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-kube-api-access-p9n8g" (OuterVolumeSpecName: "kube-api-access-p9n8g") pod "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" (UID: "2b724b6d-36e2-4a4f-9a31-0b388acc3f1c"). InnerVolumeSpecName "kube-api-access-p9n8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.296565 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb74b681-d755-4425-949e-ab8e40c9a10b-kube-api-access-4g7ns" (OuterVolumeSpecName: "kube-api-access-4g7ns") pod "eb74b681-d755-4425-949e-ab8e40c9a10b" (UID: "eb74b681-d755-4425-949e-ab8e40c9a10b"). InnerVolumeSpecName "kube-api-access-4g7ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.296599 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb74b681-d755-4425-949e-ab8e40c9a10b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb74b681-d755-4425-949e-ab8e40c9a10b" (UID: "eb74b681-d755-4425-949e-ab8e40c9a10b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390334 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390381 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g7ns\" (UniqueName: \"kubernetes.io/projected/eb74b681-d755-4425-949e-ab8e40c9a10b-kube-api-access-4g7ns\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390395 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390404 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9n8g\" (UniqueName: \"kubernetes.io/projected/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-kube-api-access-p9n8g\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390413 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390421 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390430 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb74b681-d755-4425-949e-ab8e40c9a10b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390438 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.390447 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb74b681-d755-4425-949e-ab8e40c9a10b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.546787 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5967c84899-5c587"] Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.550620 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5967c84899-5c587"] Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.558699 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"] Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.565983 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549bf89c9f-vtc8g"] Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.967976 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg"] Jan 29 16:40:28 crc kubenswrapper[4746]: E0129 16:40:28.968268 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb74b681-d755-4425-949e-ab8e40c9a10b" containerName="controller-manager" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.968283 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb74b681-d755-4425-949e-ab8e40c9a10b" containerName="controller-manager" Jan 29 16:40:28 crc kubenswrapper[4746]: E0129 16:40:28.968294 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" containerName="route-controller-manager" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.968302 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" containerName="route-controller-manager" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.968410 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" containerName="route-controller-manager" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.968423 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb74b681-d755-4425-949e-ab8e40c9a10b" containerName="controller-manager" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.968828 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.971074 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.971349 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.971967 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2"] Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.972129 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.972177 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.972582 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.972699 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.972968 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.974853 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.975018 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.977082 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.977081 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.977252 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.977484 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.979639 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg"] Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.983571 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:40:28 crc kubenswrapper[4746]: I0129 16:40:28.986942 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2"] Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.101365 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-serving-cert\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.101848 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-client-ca\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.101880 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-proxy-ca-bundles\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.101914 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k2tx\" (UniqueName: 
\"kubernetes.io/projected/58d919f0-90c7-4739-a2aa-f5e26679dc80-kube-api-access-6k2tx\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.101949 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-config\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.101972 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mc65\" (UniqueName: \"kubernetes.io/projected/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-kube-api-access-2mc65\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.102008 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-client-ca\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.102029 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-config\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.102052 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d919f0-90c7-4739-a2aa-f5e26679dc80-serving-cert\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.203734 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-client-ca\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.203802 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-config\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.203836 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d919f0-90c7-4739-a2aa-f5e26679dc80-serving-cert\") pod 
\"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.203870 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-serving-cert\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.203917 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-client-ca\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.203939 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-proxy-ca-bundles\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.203970 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k2tx\" (UniqueName: \"kubernetes.io/projected/58d919f0-90c7-4739-a2aa-f5e26679dc80-kube-api-access-6k2tx\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.204002 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-config\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.204032 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mc65\" (UniqueName: \"kubernetes.io/projected/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-kube-api-access-2mc65\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.205354 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-config\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.205773 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-client-ca\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " 
pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.205916 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-proxy-ca-bundles\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.206250 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-client-ca\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.207074 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-config\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.210630 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-serving-cert\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.210706 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d919f0-90c7-4739-a2aa-f5e26679dc80-serving-cert\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.223849 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mc65\" (UniqueName: \"kubernetes.io/projected/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-kube-api-access-2mc65\") pod \"route-controller-manager-54f457d794-2lpvg\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.226315 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k2tx\" (UniqueName: \"kubernetes.io/projected/58d919f0-90c7-4739-a2aa-f5e26679dc80-kube-api-access-6k2tx\") pod \"controller-manager-5f5cc7b6cc-w85c2\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.290089 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.304597 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.700367 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg"] Jan 29 16:40:29 crc kubenswrapper[4746]: I0129 16:40:29.737326 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2"] Jan 29 16:40:29 crc kubenswrapper[4746]: W0129 16:40:29.738816 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d919f0_90c7_4739_a2aa_f5e26679dc80.slice/crio-a6f2eaf8b033b2768197307f06ed701048c2be8804bde780210f5067a6c98c22 WatchSource:0}: Error finding container a6f2eaf8b033b2768197307f06ed701048c2be8804bde780210f5067a6c98c22: Status 404 returned error can't find the container with id a6f2eaf8b033b2768197307f06ed701048c2be8804bde780210f5067a6c98c22 Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.256969 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" event={"ID":"58d919f0-90c7-4739-a2aa-f5e26679dc80","Type":"ContainerStarted","Data":"df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14"} Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.257024 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" event={"ID":"58d919f0-90c7-4739-a2aa-f5e26679dc80","Type":"ContainerStarted","Data":"a6f2eaf8b033b2768197307f06ed701048c2be8804bde780210f5067a6c98c22"} Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.257256 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.258523 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" event={"ID":"0ce3db6a-0d00-49ad-8feb-e46f901d05d2","Type":"ContainerStarted","Data":"c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25"} Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.258908 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" event={"ID":"0ce3db6a-0d00-49ad-8feb-e46f901d05d2","Type":"ContainerStarted","Data":"cd0d63621034161e209e3616b047b15005e37aa4335d392ba128637a148e646f"} Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.259054 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.263695 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.281085 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" podStartSLOduration=3.2810634370000002 podStartE2EDuration="3.281063437s" podCreationTimestamp="2026-01-29 16:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:30.278431332 +0000 UTC 
m=+352.679015976" watchObservedRunningTime="2026-01-29 16:40:30.281063437 +0000 UTC m=+352.681648081" Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.314713 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" podStartSLOduration=3.314694312 podStartE2EDuration="3.314694312s" podCreationTimestamp="2026-01-29 16:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:30.312825498 +0000 UTC m=+352.713410142" watchObservedRunningTime="2026-01-29 16:40:30.314694312 +0000 UTC m=+352.715278956" Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.320901 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.451599 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b724b6d-36e2-4a4f-9a31-0b388acc3f1c" path="/var/lib/kubelet/pods/2b724b6d-36e2-4a4f-9a31-0b388acc3f1c/volumes" Jan 29 16:40:30 crc kubenswrapper[4746]: I0129 16:40:30.452144 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb74b681-d755-4425-949e-ab8e40c9a10b" path="/var/lib/kubelet/pods/eb74b681-d755-4425-949e-ab8e40c9a10b/volumes" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.148035 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jgm9p"] Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.149515 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.166570 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jgm9p"] Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.255626 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.255888 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-trusted-ca\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.255946 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.255981 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-bound-sa-token\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.256030 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-registry-certificates\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.256057 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.256078 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82qm\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-kube-api-access-h82qm\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.256099 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-registry-tls\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.283588 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.357919 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-bound-sa-token\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.358458 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-registry-certificates\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.358493 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-installation-pull-secrets\") pod 
\"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.358513 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82qm\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-kube-api-access-h82qm\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.358539 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-registry-tls\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.358562 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.358580 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-trusted-ca\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.360436 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-trusted-ca\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.364577 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-registry-certificates\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.365005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.371380 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-registry-tls\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.380754 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.385913 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82qm\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-kube-api-access-h82qm\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.388489 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d-bound-sa-token\") pod \"image-registry-66df7c8f76-jgm9p\" (UID: \"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.469758 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:39 crc kubenswrapper[4746]: I0129 16:40:39.900091 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jgm9p"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.319153 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" event={"ID":"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d","Type":"ContainerStarted","Data":"0189e06f2730c218863b30096cef9fd3bad966bf8ad0a9519fff10e1fcf1ef03"} Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.319637 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" event={"ID":"aa118d1f-c2a2-42b3-9ad3-1ff19ca2816d","Type":"ContainerStarted","Data":"3c498af217ac329e7b2467320f33361917c8f6f1458d5faab108283f2b5d08b4"} Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.319666 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.337637 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tqxz6"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.337975 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tqxz6" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="registry-server" containerID="cri-o://a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477" gracePeriod=30 Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.350953 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8w7wb"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.353745 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8w7wb" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="registry-server" containerID="cri-o://7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2" gracePeriod=30 Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.370544 4746 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p" podStartSLOduration=1.370516303 podStartE2EDuration="1.370516303s" podCreationTimestamp="2026-01-29 16:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:40.366010165 +0000 UTC m=+362.766594809" watchObservedRunningTime="2026-01-29 16:40:40.370516303 +0000 UTC m=+362.771100957" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.375679 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khd9z"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.375925 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" containerID="cri-o://59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355" gracePeriod=30 Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.381865 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x89zf"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.382185 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x89zf" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="registry-server" containerID="cri-o://51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" gracePeriod=30 Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.387871 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5lkm"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.388213 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g5lkm" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="registry-server" containerID="cri-o://90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8" gracePeriod=30 Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.392847 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qmpr4"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.393573 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.407152 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qmpr4"] Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.586896 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a75f7336-fc5b-42b8-8315-d2ec3025832b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.587018 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76c66\" (UniqueName: \"kubernetes.io/projected/a75f7336-fc5b-42b8-8315-d2ec3025832b-kube-api-access-76c66\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.587784 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a75f7336-fc5b-42b8-8315-d2ec3025832b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.615709 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-khd9z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.615785 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.689408 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76c66\" (UniqueName: \"kubernetes.io/projected/a75f7336-fc5b-42b8-8315-d2ec3025832b-kube-api-access-76c66\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.689492 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a75f7336-fc5b-42b8-8315-d2ec3025832b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.689549 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a75f7336-fc5b-42b8-8315-d2ec3025832b-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.691218 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a75f7336-fc5b-42b8-8315-d2ec3025832b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.697974 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a75f7336-fc5b-42b8-8315-d2ec3025832b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.707242 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76c66\" (UniqueName: \"kubernetes.io/projected/a75f7336-fc5b-42b8-8315-d2ec3025832b-kube-api-access-76c66\") pod \"marketplace-operator-79b997595-qmpr4\" (UID: \"a75f7336-fc5b-42b8-8315-d2ec3025832b\") " pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.728212 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:40 crc kubenswrapper[4746]: I0129 16:40:40.932563 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tqxz6" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.097667 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f is running failed: container process not found" containerID="51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.098018 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-utilities\") pod \"d1bf7638-7d83-4b72-addf-51bae49b7390\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.098266 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56q9d\" (UniqueName: \"kubernetes.io/projected/d1bf7638-7d83-4b72-addf-51bae49b7390-kube-api-access-56q9d\") pod \"d1bf7638-7d83-4b72-addf-51bae49b7390\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.098341 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-catalog-content\") pod \"d1bf7638-7d83-4b72-addf-51bae49b7390\" (UID: \"d1bf7638-7d83-4b72-addf-51bae49b7390\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.099225 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-utilities" (OuterVolumeSpecName: "utilities") pod "d1bf7638-7d83-4b72-addf-51bae49b7390" (UID: "d1bf7638-7d83-4b72-addf-51bae49b7390"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.099974 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f is running failed: container process not found" containerID="51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.100371 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f is running failed: container process not found" containerID="51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.100420 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-x89zf" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="registry-server" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.112385 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bf7638-7d83-4b72-addf-51bae49b7390-kube-api-access-56q9d" (OuterVolumeSpecName: "kube-api-access-56q9d") pod "d1bf7638-7d83-4b72-addf-51bae49b7390" (UID: "d1bf7638-7d83-4b72-addf-51bae49b7390"). InnerVolumeSpecName "kube-api-access-56q9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.149619 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.152015 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.168783 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1bf7638-7d83-4b72-addf-51bae49b7390" (UID: "d1bf7638-7d83-4b72-addf-51bae49b7390"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.178086 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8w7wb" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.188990 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.200702 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.200736 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56q9d\" (UniqueName: \"kubernetes.io/projected/d1bf7638-7d83-4b72-addf-51bae49b7390-kube-api-access-56q9d\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.200748 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bf7638-7d83-4b72-addf-51bae49b7390-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302530 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjknz\" (UniqueName: \"kubernetes.io/projected/608c383e-45e1-43dd-b8ad-9a7499953754-kube-api-access-kjknz\") pod \"608c383e-45e1-43dd-b8ad-9a7499953754\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302716 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p82tk\" (UniqueName: \"kubernetes.io/projected/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-kube-api-access-p82tk\") pod \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302785 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-utilities\") pod \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302837 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vhhr\" (UniqueName: \"kubernetes.io/projected/8b74b912-b845-497d-8566-6975dc1fdce5-kube-api-access-6vhhr\") pod \"8b74b912-b845-497d-8566-6975dc1fdce5\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302919 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-utilities\") pod \"8b74b912-b845-497d-8566-6975dc1fdce5\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302945 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-operator-metrics\") pod \"608c383e-45e1-43dd-b8ad-9a7499953754\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302972 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-utilities\") pod \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.302997 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-catalog-content\") pod \"8b74b912-b845-497d-8566-6975dc1fdce5\" (UID: \"8b74b912-b845-497d-8566-6975dc1fdce5\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.303026 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-trusted-ca\") pod \"608c383e-45e1-43dd-b8ad-9a7499953754\" (UID: \"608c383e-45e1-43dd-b8ad-9a7499953754\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.303064 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8dmw\" (UniqueName: \"kubernetes.io/projected/989fe817-0cfd-4b55-aaaa-dd31bb39f219-kube-api-access-m8dmw\") pod \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.303111 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-catalog-content\") pod \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\" (UID: \"989fe817-0cfd-4b55-aaaa-dd31bb39f219\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.303141 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-catalog-content\") pod \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\" (UID: \"b36b404d-6a34-46bf-a5c8-d4322e3ffc07\") " Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.304849 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "608c383e-45e1-43dd-b8ad-9a7499953754" (UID: "608c383e-45e1-43dd-b8ad-9a7499953754"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.305771 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-utilities" (OuterVolumeSpecName: "utilities") pod "b36b404d-6a34-46bf-a5c8-d4322e3ffc07" (UID: "b36b404d-6a34-46bf-a5c8-d4322e3ffc07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.307569 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608c383e-45e1-43dd-b8ad-9a7499953754-kube-api-access-kjknz" (OuterVolumeSpecName: "kube-api-access-kjknz") pod "608c383e-45e1-43dd-b8ad-9a7499953754" (UID: "608c383e-45e1-43dd-b8ad-9a7499953754"). InnerVolumeSpecName "kube-api-access-kjknz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.309543 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-utilities" (OuterVolumeSpecName: "utilities") pod "989fe817-0cfd-4b55-aaaa-dd31bb39f219" (UID: "989fe817-0cfd-4b55-aaaa-dd31bb39f219"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.311145 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b74b912-b845-497d-8566-6975dc1fdce5-kube-api-access-6vhhr" (OuterVolumeSpecName: "kube-api-access-6vhhr") pod "8b74b912-b845-497d-8566-6975dc1fdce5" (UID: "8b74b912-b845-497d-8566-6975dc1fdce5"). InnerVolumeSpecName "kube-api-access-6vhhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.313084 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "608c383e-45e1-43dd-b8ad-9a7499953754" (UID: "608c383e-45e1-43dd-b8ad-9a7499953754"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.314040 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-kube-api-access-p82tk" (OuterVolumeSpecName: "kube-api-access-p82tk") pod "b36b404d-6a34-46bf-a5c8-d4322e3ffc07" (UID: "b36b404d-6a34-46bf-a5c8-d4322e3ffc07"). InnerVolumeSpecName "kube-api-access-p82tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.314071 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/989fe817-0cfd-4b55-aaaa-dd31bb39f219-kube-api-access-m8dmw" (OuterVolumeSpecName: "kube-api-access-m8dmw") pod "989fe817-0cfd-4b55-aaaa-dd31bb39f219" (UID: "989fe817-0cfd-4b55-aaaa-dd31bb39f219"). InnerVolumeSpecName "kube-api-access-m8dmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.319489 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-utilities" (OuterVolumeSpecName: "utilities") pod "8b74b912-b845-497d-8566-6975dc1fdce5" (UID: "8b74b912-b845-497d-8566-6975dc1fdce5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.334264 4746 generic.go:334] "Generic (PLEG): container finished" podID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerID="90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8" exitCode=0 Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.334344 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5lkm" event={"ID":"989fe817-0cfd-4b55-aaaa-dd31bb39f219","Type":"ContainerDied","Data":"90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.334384 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5lkm" event={"ID":"989fe817-0cfd-4b55-aaaa-dd31bb39f219","Type":"ContainerDied","Data":"dc8246597112be5d9f2d4bef6610d1ac120a9d198570a9a20acd634d4a23235c"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.334403 4746 scope.go:117] "RemoveContainer" containerID="90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.334551 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g5lkm" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.343652 4746 generic.go:334] "Generic (PLEG): container finished" podID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerID="51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" exitCode=0 Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.343735 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x89zf" event={"ID":"b36b404d-6a34-46bf-a5c8-d4322e3ffc07","Type":"ContainerDied","Data":"51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.343784 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x89zf" event={"ID":"b36b404d-6a34-46bf-a5c8-d4322e3ffc07","Type":"ContainerDied","Data":"739b80845a65c5e19ba26dff4bc2d5504cdb0e921690158be4addf936b59563b"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.343871 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x89zf" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.351035 4746 generic.go:334] "Generic (PLEG): container finished" podID="608c383e-45e1-43dd-b8ad-9a7499953754" containerID="59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355" exitCode=0 Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.351095 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" event={"ID":"608c383e-45e1-43dd-b8ad-9a7499953754","Type":"ContainerDied","Data":"59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.351122 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" event={"ID":"608c383e-45e1-43dd-b8ad-9a7499953754","Type":"ContainerDied","Data":"f19df6ebdb087666650de4154adaa72cee0da99a2ec72c8d83851a0dc7ec301c"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.351365 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-khd9z" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.352325 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qmpr4"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.363984 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerID="a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477" exitCode=0 Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.364037 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxz6" event={"ID":"d1bf7638-7d83-4b72-addf-51bae49b7390","Type":"ContainerDied","Data":"a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.364059 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxz6" event={"ID":"d1bf7638-7d83-4b72-addf-51bae49b7390","Type":"ContainerDied","Data":"271188565ba6560f1e40b5cdc795fda5228931abe3d9a8af55d7c6872b38d539"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.364135 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tqxz6" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.374784 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8w7wb" event={"ID":"8b74b912-b845-497d-8566-6975dc1fdce5","Type":"ContainerDied","Data":"7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.374980 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8w7wb" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.376266 4746 scope.go:117] "RemoveContainer" containerID="9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.374612 4746 generic.go:334] "Generic (PLEG): container finished" podID="8b74b912-b845-497d-8566-6975dc1fdce5" containerID="7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2" exitCode=0 Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.379035 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8w7wb" event={"ID":"8b74b912-b845-497d-8566-6975dc1fdce5","Type":"ContainerDied","Data":"675719dc6ef108ae8af9842143fcaf233144201bfe64d617bb1d46f36f719be7"} Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.383820 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b36b404d-6a34-46bf-a5c8-d4322e3ffc07" (UID: "b36b404d-6a34-46bf-a5c8-d4322e3ffc07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406567 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406612 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjknz\" (UniqueName: \"kubernetes.io/projected/608c383e-45e1-43dd-b8ad-9a7499953754-kube-api-access-kjknz\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406624 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p82tk\" (UniqueName: \"kubernetes.io/projected/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-kube-api-access-p82tk\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406634 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406644 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vhhr\" (UniqueName: \"kubernetes.io/projected/8b74b912-b845-497d-8566-6975dc1fdce5-kube-api-access-6vhhr\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406652 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406677 4746 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406685 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b36b404d-6a34-46bf-a5c8-d4322e3ffc07-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406694 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/608c383e-45e1-43dd-b8ad-9a7499953754-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.406702 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8dmw\" (UniqueName: \"kubernetes.io/projected/989fe817-0cfd-4b55-aaaa-dd31bb39f219-kube-api-access-m8dmw\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.416101 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b74b912-b845-497d-8566-6975dc1fdce5" (UID: "8b74b912-b845-497d-8566-6975dc1fdce5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.423134 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khd9z"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.444487 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-khd9z"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.448480 4746 scope.go:117] "RemoveContainer" containerID="907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.448599 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tqxz6"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.453046 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tqxz6"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.491361 4746 scope.go:117] "RemoveContainer" containerID="90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.491984 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8\": container with ID starting with 90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8 not found: ID does not exist" containerID="90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.492036 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8"} err="failed to get container status \"90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8\": rpc error: code = NotFound desc = could not find container \"90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8\": container with ID starting with 90f799eab38d800391c55454658cf0e4324b76d398e55c1f32f129cb92dbcff8 not found: ID 
does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.492082 4746 scope.go:117] "RemoveContainer" containerID="9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.492892 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6\": container with ID starting with 9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6 not found: ID does not exist" containerID="9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.492935 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6"} err="failed to get container status \"9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6\": rpc error: code = NotFound desc = could not find container \"9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6\": container with ID starting with 9a5b9124aa5a2889782e0e66cc99759370d31d5054a3cbe9027e716810d8baf6 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.492968 4746 scope.go:117] "RemoveContainer" containerID="907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.493627 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9\": container with ID starting with 907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9 not found: ID does not exist" containerID="907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.493658 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9"} err="failed to get container status \"907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9\": rpc error: code = NotFound desc = could not find container \"907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9\": container with ID starting with 907eac06c76c41a21c584c7e61943158c207acf9bf90ca76870d4d70a5d143b9 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.493678 4746 scope.go:117] "RemoveContainer" containerID="51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.505563 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "989fe817-0cfd-4b55-aaaa-dd31bb39f219" (UID: "989fe817-0cfd-4b55-aaaa-dd31bb39f219"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.510206 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b74b912-b845-497d-8566-6975dc1fdce5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.510409 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/989fe817-0cfd-4b55-aaaa-dd31bb39f219-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.513559 4746 scope.go:117] "RemoveContainer" containerID="c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.530930 4746 scope.go:117] "RemoveContainer" containerID="8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.545961 4746 scope.go:117] "RemoveContainer" containerID="51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.546461 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f\": container with ID starting with 51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f not found: ID does not exist" containerID="51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.546502 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f"} err="failed to get container status \"51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f\": rpc error: code = NotFound desc = could not find container \"51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f\": container with ID starting with 51e1b5cf868b6341ce13b3d6d8d119dcb1b0c9b8ea84f54b1d1e3e6e855b387f not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.546547 4746 scope.go:117] "RemoveContainer" containerID="c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.546942 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5\": container with ID starting with c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5 not found: ID does not exist" containerID="c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.546969 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5"} err="failed to get container status \"c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5\": rpc error: code = NotFound desc = could not find container \"c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5\": container with ID starting with c99ccfbcac8678ab64316ea7b4823499691e1333e979a3eb82073bbff43209b5 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.546987 4746 scope.go:117] "RemoveContainer" 
containerID="8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.547363 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45\": container with ID starting with 8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45 not found: ID does not exist" containerID="8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.547419 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45"} err="failed to get container status \"8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45\": rpc error: code = NotFound desc = could not find container \"8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45\": container with ID starting with 8ea8a3a81bd92841f76d67bf32c51d42bbd68871c054b2f86ab2f7d472797c45 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.547452 4746 scope.go:117] "RemoveContainer" containerID="59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.567573 4746 scope.go:117] "RemoveContainer" containerID="a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.597415 4746 scope.go:117] "RemoveContainer" containerID="59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.598158 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355\": container with ID starting with 59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355 not found: ID does not exist" containerID="59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.598241 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355"} err="failed to get container status \"59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355\": rpc error: code = NotFound desc = could not find container \"59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355\": container with ID starting with 59163592ab2d95eeb228cfcbce8522451930e114d3d113671a23588cda540355 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.598291 4746 scope.go:117] "RemoveContainer" containerID="a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.599139 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60\": container with ID starting with a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60 not found: ID does not exist" containerID="a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.599175 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60"} err="failed to get container status \"a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60\": rpc error: code = NotFound desc = could not find container \"a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60\": container with ID starting with a47857e325c1f197c4e0c97e6661569f66db5d06e1788b131666c8c405371e60 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.599321 4746 scope.go:117] "RemoveContainer" containerID="a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.614427 4746 scope.go:117] "RemoveContainer" containerID="ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.633978 4746 scope.go:117] "RemoveContainer" containerID="61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.669041 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g5lkm"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.678325 4746 scope.go:117] "RemoveContainer" containerID="a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.683867 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g5lkm"] Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.684466 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477\": container with ID starting with a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477 not found: ID does not exist" containerID="a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.684513 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477"} err="failed to get container status \"a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477\": rpc error: code = NotFound desc = could not find container \"a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477\": container with ID starting with a7db2b3b6031df583b35c8019ef59f34bc0581ad94eafe259f31f4ba03e90477 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.684553 4746 scope.go:117] "RemoveContainer" containerID="ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.685479 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560\": container with ID starting with ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560 not found: ID does not exist" containerID="ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.685515 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560"} err="failed to get container status 
\"ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560\": rpc error: code = NotFound desc = could not find container \"ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560\": container with ID starting with ee4f9d8eb31caffcece5b42bda1316f4372140c6b1123d492ef8e2edd69f2560 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.685537 4746 scope.go:117] "RemoveContainer" containerID="61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.686014 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d\": container with ID starting with 61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d not found: ID does not exist" containerID="61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.686044 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d"} err="failed to get container status \"61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d\": rpc error: code = NotFound desc = could not find container \"61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d\": container with ID starting with 61f804780d3e84f6ed69fe333370715e8b41236866dd9b12f35ba8bd76817e3d not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.686066 4746 scope.go:117] "RemoveContainer" containerID="7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.696000 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x89zf"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.699393 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x89zf"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.717091 4746 scope.go:117] "RemoveContainer" containerID="31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.720253 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8w7wb"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.728401 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8w7wb"] Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.750794 4746 scope.go:117] "RemoveContainer" containerID="6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.769097 4746 scope.go:117] "RemoveContainer" containerID="7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.769628 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2\": container with ID starting with 7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2 not found: ID does not exist" containerID="7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.769675 4746 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2"} err="failed to get container status \"7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2\": rpc error: code = NotFound desc = could not find container \"7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2\": container with ID starting with 7aff7fd8376f216b30a408355988fc29cfa0a9d6717f8bad326a3bd5c6c855d2 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.769739 4746 scope.go:117] "RemoveContainer" containerID="31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.770102 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2\": container with ID starting with 31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2 not found: ID does not exist" containerID="31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.770139 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2"} err="failed to get container status \"31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2\": rpc error: code = NotFound desc = could not find container \"31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2\": container with ID starting with 31d9e2815afdae80862a2d370460d960ea13bdba42db585fb156cb53a8ef26f2 not found: ID does not exist" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.770167 4746 scope.go:117] "RemoveContainer" containerID="6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb" Jan 29 16:40:41 crc kubenswrapper[4746]: E0129 16:40:41.770434 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb\": container with ID starting with 6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb not found: ID does not exist" containerID="6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb" Jan 29 16:40:41 crc kubenswrapper[4746]: I0129 16:40:41.770457 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb"} err="failed to get container status \"6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb\": rpc error: code = NotFound desc = could not find container \"6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb\": container with ID starting with 6acb853830a1c8efdcc1c37c9e17a6f56b50a055c5c1626bf85334c10ba93acb not found: ID does not exist" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.388633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" event={"ID":"a75f7336-fc5b-42b8-8315-d2ec3025832b","Type":"ContainerStarted","Data":"26cf8cf11e90fd785cd65c9f79b33636062d47093fb26e771c25a0b7a7155bc6"} Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.389135 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" 
event={"ID":"a75f7336-fc5b-42b8-8315-d2ec3025832b","Type":"ContainerStarted","Data":"d0200efbff7d0128b710d39d3bca571f7ba80338ea065ec72e3bc09215d08a32"} Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.389159 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.393629 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.410106 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qmpr4" podStartSLOduration=2.410087665 podStartE2EDuration="2.410087665s" podCreationTimestamp="2026-01-29 16:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:42.407342748 +0000 UTC m=+364.807927392" watchObservedRunningTime="2026-01-29 16:40:42.410087665 +0000 UTC m=+364.810672309" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.452828 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" path="/var/lib/kubelet/pods/608c383e-45e1-43dd-b8ad-9a7499953754/volumes" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.453950 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" path="/var/lib/kubelet/pods/8b74b912-b845-497d-8566-6975dc1fdce5/volumes" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.454648 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" path="/var/lib/kubelet/pods/989fe817-0cfd-4b55-aaaa-dd31bb39f219/volumes" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.456808 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" path="/var/lib/kubelet/pods/b36b404d-6a34-46bf-a5c8-d4322e3ffc07/volumes" Jan 29 16:40:42 crc kubenswrapper[4746]: I0129 16:40:42.457430 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" path="/var/lib/kubelet/pods/d1bf7638-7d83-4b72-addf-51bae49b7390/volumes" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.787753 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mfcrm"] Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788006 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788023 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788034 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788042 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788055 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" 
containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788063 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788074 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788082 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788094 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788101 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788114 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788122 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788135 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788144 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788153 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788160 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788170 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788178 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788232 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788241 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="extract-content" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788255 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788262 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="extract-utilities" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788274 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788281 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788290 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788298 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788417 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bf7638-7d83-4b72-addf-51bae49b7390" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788431 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788443 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36b404d-6a34-46bf-a5c8-d4322e3ffc07" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788454 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="989fe817-0cfd-4b55-aaaa-dd31bb39f219" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788462 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b74b912-b845-497d-8566-6975dc1fdce5" containerName="registry-server" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788474 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" Jan 29 16:40:43 crc kubenswrapper[4746]: E0129 16:40:43.788580 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.788590 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="608c383e-45e1-43dd-b8ad-9a7499953754" containerName="marketplace-operator" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.789349 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.792548 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.803868 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfcrm"] Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.952046 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-utilities\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.952443 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czhc2\" (UniqueName: \"kubernetes.io/projected/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-kube-api-access-czhc2\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.952551 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-catalog-content\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.981591 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrwjj"] Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.983599 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:43 crc kubenswrapper[4746]: I0129 16:40:43.986736 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.030049 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrwjj"] Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.053873 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-utilities\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.054286 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czhc2\" (UniqueName: \"kubernetes.io/projected/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-kube-api-access-czhc2\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.054389 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-catalog-content\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.054468 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-utilities\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.054900 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-catalog-content\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.073581 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czhc2\" (UniqueName: \"kubernetes.io/projected/6c5d371b-e906-4098-b72c-7b41c5fd2ec6-kube-api-access-czhc2\") pod \"redhat-marketplace-mfcrm\" (UID: \"6c5d371b-e906-4098-b72c-7b41c5fd2ec6\") " pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.120387 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfcrm" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.156714 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vtk\" (UniqueName: \"kubernetes.io/projected/a2a7e5df-24f7-4400-93be-0c812a827d15-kube-api-access-j5vtk\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.156770 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2a7e5df-24f7-4400-93be-0c812a827d15-catalog-content\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.157225 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2a7e5df-24f7-4400-93be-0c812a827d15-utilities\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.258166 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vtk\" (UniqueName: \"kubernetes.io/projected/a2a7e5df-24f7-4400-93be-0c812a827d15-kube-api-access-j5vtk\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.258588 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2a7e5df-24f7-4400-93be-0c812a827d15-catalog-content\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.258626 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2a7e5df-24f7-4400-93be-0c812a827d15-utilities\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.259544 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2a7e5df-24f7-4400-93be-0c812a827d15-utilities\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.259655 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2a7e5df-24f7-4400-93be-0c812a827d15-catalog-content\") pod \"redhat-operators-jrwjj\" (UID: \"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.279239 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vtk\" (UniqueName: \"kubernetes.io/projected/a2a7e5df-24f7-4400-93be-0c812a827d15-kube-api-access-j5vtk\") pod \"redhat-operators-jrwjj\" (UID: 
\"a2a7e5df-24f7-4400-93be-0c812a827d15\") " pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.296499 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrwjj" Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.524362 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfcrm"] Jan 29 16:40:44 crc kubenswrapper[4746]: W0129 16:40:44.531994 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c5d371b_e906_4098_b72c_7b41c5fd2ec6.slice/crio-d6eb95f6cbdd88f212baed21affa59b2c2f1ebb5465f41fbfcbdaf5c69faaf16 WatchSource:0}: Error finding container d6eb95f6cbdd88f212baed21affa59b2c2f1ebb5465f41fbfcbdaf5c69faaf16: Status 404 returned error can't find the container with id d6eb95f6cbdd88f212baed21affa59b2c2f1ebb5465f41fbfcbdaf5c69faaf16 Jan 29 16:40:44 crc kubenswrapper[4746]: I0129 16:40:44.755180 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrwjj"] Jan 29 16:40:45 crc kubenswrapper[4746]: I0129 16:40:45.418487 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c5d371b-e906-4098-b72c-7b41c5fd2ec6" containerID="a3f89714aa0b6f174cfeb3a2f82cbdea480f54d0a8dd683bc0c9af47b085b15e" exitCode=0 Jan 29 16:40:45 crc kubenswrapper[4746]: I0129 16:40:45.419001 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfcrm" event={"ID":"6c5d371b-e906-4098-b72c-7b41c5fd2ec6","Type":"ContainerDied","Data":"a3f89714aa0b6f174cfeb3a2f82cbdea480f54d0a8dd683bc0c9af47b085b15e"} Jan 29 16:40:45 crc kubenswrapper[4746]: I0129 16:40:45.419073 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfcrm" event={"ID":"6c5d371b-e906-4098-b72c-7b41c5fd2ec6","Type":"ContainerStarted","Data":"d6eb95f6cbdd88f212baed21affa59b2c2f1ebb5465f41fbfcbdaf5c69faaf16"} Jan 29 16:40:45 crc kubenswrapper[4746]: I0129 16:40:45.422513 4746 generic.go:334] "Generic (PLEG): container finished" podID="a2a7e5df-24f7-4400-93be-0c812a827d15" containerID="7d3d2b777b646f5b57a81491dd48b9fd59ead69617e39fa4d11b1e450d62e667" exitCode=0 Jan 29 16:40:45 crc kubenswrapper[4746]: I0129 16:40:45.422547 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrwjj" event={"ID":"a2a7e5df-24f7-4400-93be-0c812a827d15","Type":"ContainerDied","Data":"7d3d2b777b646f5b57a81491dd48b9fd59ead69617e39fa4d11b1e450d62e667"} Jan 29 16:40:45 crc kubenswrapper[4746]: I0129 16:40:45.422565 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrwjj" event={"ID":"a2a7e5df-24f7-4400-93be-0c812a827d15","Type":"ContainerStarted","Data":"ba9eb5937c0b2155f3f7c3c144b4e204123df11901f43a04264bf6548eb774a6"} Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.183654 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-98p6v"] Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.185658 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.188135 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.197027 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98p6v"] Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.300643 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da150909-e111-42fc-aa44-a4e181c3e57a-utilities\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.300712 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da150909-e111-42fc-aa44-a4e181c3e57a-catalog-content\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.300755 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84dp\" (UniqueName: \"kubernetes.io/projected/da150909-e111-42fc-aa44-a4e181c3e57a-kube-api-access-n84dp\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.386379 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wjcnr"] Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.387411 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.390055 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.401786 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da150909-e111-42fc-aa44-a4e181c3e57a-catalog-content\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.401842 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n84dp\" (UniqueName: \"kubernetes.io/projected/da150909-e111-42fc-aa44-a4e181c3e57a-kube-api-access-n84dp\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.401925 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da150909-e111-42fc-aa44-a4e181c3e57a-utilities\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.402467 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da150909-e111-42fc-aa44-a4e181c3e57a-catalog-content\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.402499 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da150909-e111-42fc-aa44-a4e181c3e57a-utilities\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.419315 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjcnr"] Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.429896 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrwjj" event={"ID":"a2a7e5df-24f7-4400-93be-0c812a827d15","Type":"ContainerStarted","Data":"61af91f59951b5ed250f0f8a7f691c4fec9482c33155fa158a3466beec5b61d6"} Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.437739 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n84dp\" (UniqueName: \"kubernetes.io/projected/da150909-e111-42fc-aa44-a4e181c3e57a-kube-api-access-n84dp\") pod \"certified-operators-98p6v\" (UID: \"da150909-e111-42fc-aa44-a4e181c3e57a\") " pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.444314 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfcrm" event={"ID":"6c5d371b-e906-4098-b72c-7b41c5fd2ec6","Type":"ContainerStarted","Data":"18a3e8d60d69458837a7c23fdc11a89745ada364ccdf4b0efa1faefb494e12ff"} Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.503317 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-catalog-content\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.503384 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-utilities\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.503475 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-427qg\" (UniqueName: \"kubernetes.io/projected/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-kube-api-access-427qg\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.525651 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98p6v" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.605081 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-utilities\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.605180 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-427qg\" (UniqueName: \"kubernetes.io/projected/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-kube-api-access-427qg\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.605266 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-catalog-content\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.606226 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-catalog-content\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.606637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-utilities\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.630337 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-427qg\" (UniqueName: 
\"kubernetes.io/projected/d26e9e37-91e8-407b-b16d-2ce68fa11f2d-kube-api-access-427qg\") pod \"community-operators-wjcnr\" (UID: \"d26e9e37-91e8-407b-b16d-2ce68fa11f2d\") " pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.703556 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wjcnr" Jan 29 16:40:46 crc kubenswrapper[4746]: I0129 16:40:46.929030 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98p6v"] Jan 29 16:40:46 crc kubenswrapper[4746]: W0129 16:40:46.935378 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda150909_e111_42fc_aa44_a4e181c3e57a.slice/crio-61804237647a5f721ab51c2e1fe842d616774ebbbf7b4d8c02675e2af711ef30 WatchSource:0}: Error finding container 61804237647a5f721ab51c2e1fe842d616774ebbbf7b4d8c02675e2af711ef30: Status 404 returned error can't find the container with id 61804237647a5f721ab51c2e1fe842d616774ebbbf7b4d8c02675e2af711ef30 Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.147835 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wjcnr"] Jan 29 16:40:47 crc kubenswrapper[4746]: W0129 16:40:47.154804 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd26e9e37_91e8_407b_b16d_2ce68fa11f2d.slice/crio-952f8c2d96edd8f497222aa09e49cc0b2bd5b9e17814bb695283b2541de7ca0b WatchSource:0}: Error finding container 952f8c2d96edd8f497222aa09e49cc0b2bd5b9e17814bb695283b2541de7ca0b: Status 404 returned error can't find the container with id 952f8c2d96edd8f497222aa09e49cc0b2bd5b9e17814bb695283b2541de7ca0b Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.417329 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2"] Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.417938 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" podUID="58d919f0-90c7-4739-a2aa-f5e26679dc80" containerName="controller-manager" containerID="cri-o://df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14" gracePeriod=30 Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.457582 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c5d371b-e906-4098-b72c-7b41c5fd2ec6" containerID="18a3e8d60d69458837a7c23fdc11a89745ada364ccdf4b0efa1faefb494e12ff" exitCode=0 Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.457679 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfcrm" event={"ID":"6c5d371b-e906-4098-b72c-7b41c5fd2ec6","Type":"ContainerDied","Data":"18a3e8d60d69458837a7c23fdc11a89745ada364ccdf4b0efa1faefb494e12ff"} Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.457717 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfcrm" event={"ID":"6c5d371b-e906-4098-b72c-7b41c5fd2ec6","Type":"ContainerStarted","Data":"8c889d13bd6ed51781b3e32570f4e5fbf25c7788f0a25d815a303c6478d12cd4"} Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.462857 4746 generic.go:334] "Generic (PLEG): container finished" podID="da150909-e111-42fc-aa44-a4e181c3e57a" 
containerID="6d8151d2fdc00ec0c7cb9bfe142fea2bb1502660bcc62df3f26d8a35f4661523" exitCode=0 Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.462912 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98p6v" event={"ID":"da150909-e111-42fc-aa44-a4e181c3e57a","Type":"ContainerDied","Data":"6d8151d2fdc00ec0c7cb9bfe142fea2bb1502660bcc62df3f26d8a35f4661523"} Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.462936 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98p6v" event={"ID":"da150909-e111-42fc-aa44-a4e181c3e57a","Type":"ContainerStarted","Data":"61804237647a5f721ab51c2e1fe842d616774ebbbf7b4d8c02675e2af711ef30"} Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.465574 4746 generic.go:334] "Generic (PLEG): container finished" podID="d26e9e37-91e8-407b-b16d-2ce68fa11f2d" containerID="2d5295c603a4ba38814fec0ea39dea44e25fe469eebc95ac8e24d3f6a4798f8e" exitCode=0 Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.465927 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjcnr" event={"ID":"d26e9e37-91e8-407b-b16d-2ce68fa11f2d","Type":"ContainerDied","Data":"2d5295c603a4ba38814fec0ea39dea44e25fe469eebc95ac8e24d3f6a4798f8e"} Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.466017 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjcnr" event={"ID":"d26e9e37-91e8-407b-b16d-2ce68fa11f2d","Type":"ContainerStarted","Data":"952f8c2d96edd8f497222aa09e49cc0b2bd5b9e17814bb695283b2541de7ca0b"} Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.468965 4746 generic.go:334] "Generic (PLEG): container finished" podID="a2a7e5df-24f7-4400-93be-0c812a827d15" containerID="61af91f59951b5ed250f0f8a7f691c4fec9482c33155fa158a3466beec5b61d6" exitCode=0 Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.469014 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrwjj" event={"ID":"a2a7e5df-24f7-4400-93be-0c812a827d15","Type":"ContainerDied","Data":"61af91f59951b5ed250f0f8a7f691c4fec9482c33155fa158a3466beec5b61d6"} Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.486330 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mfcrm" podStartSLOduration=3.011304907 podStartE2EDuration="4.486303497s" podCreationTimestamp="2026-01-29 16:40:43 +0000 UTC" firstStartedPulling="2026-01-29 16:40:45.421001135 +0000 UTC m=+367.821585779" lastFinishedPulling="2026-01-29 16:40:46.895999725 +0000 UTC m=+369.296584369" observedRunningTime="2026-01-29 16:40:47.485816134 +0000 UTC m=+369.886400778" watchObservedRunningTime="2026-01-29 16:40:47.486303497 +0000 UTC m=+369.886888151" Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.503618 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg"] Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.503878 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" podUID="0ce3db6a-0d00-49ad-8feb-e46f901d05d2" containerName="route-controller-manager" containerID="cri-o://c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25" gracePeriod=30 Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.961455 4746 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" Jan 29 16:40:47 crc kubenswrapper[4746]: I0129 16:40:47.990701 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.123988 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-config\") pod \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.124733 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-client-ca\") pod \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.124825 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-config\") pod \"58d919f0-90c7-4739-a2aa-f5e26679dc80\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.124883 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-proxy-ca-bundles\") pod \"58d919f0-90c7-4739-a2aa-f5e26679dc80\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.124983 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-serving-cert\") pod \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125021 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d919f0-90c7-4739-a2aa-f5e26679dc80-serving-cert\") pod \"58d919f0-90c7-4739-a2aa-f5e26679dc80\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125045 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mc65\" (UniqueName: \"kubernetes.io/projected/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-kube-api-access-2mc65\") pod \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\" (UID: \"0ce3db6a-0d00-49ad-8feb-e46f901d05d2\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125070 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k2tx\" (UniqueName: \"kubernetes.io/projected/58d919f0-90c7-4739-a2aa-f5e26679dc80-kube-api-access-6k2tx\") pod \"58d919f0-90c7-4739-a2aa-f5e26679dc80\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125104 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-client-ca\") pod \"58d919f0-90c7-4739-a2aa-f5e26679dc80\" (UID: \"58d919f0-90c7-4739-a2aa-f5e26679dc80\") " Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 
16:40:48.125356 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-config" (OuterVolumeSpecName: "config") pod "0ce3db6a-0d00-49ad-8feb-e46f901d05d2" (UID: "0ce3db6a-0d00-49ad-8feb-e46f901d05d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125464 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ce3db6a-0d00-49ad-8feb-e46f901d05d2" (UID: "0ce3db6a-0d00-49ad-8feb-e46f901d05d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125658 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "58d919f0-90c7-4739-a2aa-f5e26679dc80" (UID: "58d919f0-90c7-4739-a2aa-f5e26679dc80"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125834 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125847 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.125860 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.126050 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-client-ca" (OuterVolumeSpecName: "client-ca") pod "58d919f0-90c7-4739-a2aa-f5e26679dc80" (UID: "58d919f0-90c7-4739-a2aa-f5e26679dc80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.126291 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-config" (OuterVolumeSpecName: "config") pod "58d919f0-90c7-4739-a2aa-f5e26679dc80" (UID: "58d919f0-90c7-4739-a2aa-f5e26679dc80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.132490 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d919f0-90c7-4739-a2aa-f5e26679dc80-kube-api-access-6k2tx" (OuterVolumeSpecName: "kube-api-access-6k2tx") pod "58d919f0-90c7-4739-a2aa-f5e26679dc80" (UID: "58d919f0-90c7-4739-a2aa-f5e26679dc80"). InnerVolumeSpecName "kube-api-access-6k2tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.133395 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ce3db6a-0d00-49ad-8feb-e46f901d05d2" (UID: "0ce3db6a-0d00-49ad-8feb-e46f901d05d2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.134328 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-kube-api-access-2mc65" (OuterVolumeSpecName: "kube-api-access-2mc65") pod "0ce3db6a-0d00-49ad-8feb-e46f901d05d2" (UID: "0ce3db6a-0d00-49ad-8feb-e46f901d05d2"). InnerVolumeSpecName "kube-api-access-2mc65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.134434 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d919f0-90c7-4739-a2aa-f5e26679dc80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "58d919f0-90c7-4739-a2aa-f5e26679dc80" (UID: "58d919f0-90c7-4739-a2aa-f5e26679dc80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.227098 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.227142 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58d919f0-90c7-4739-a2aa-f5e26679dc80-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.227157 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mc65\" (UniqueName: \"kubernetes.io/projected/0ce3db6a-0d00-49ad-8feb-e46f901d05d2-kube-api-access-2mc65\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.227171 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k2tx\" (UniqueName: \"kubernetes.io/projected/58d919f0-90c7-4739-a2aa-f5e26679dc80-kube-api-access-6k2tx\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.227199 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.227213 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58d919f0-90c7-4739-a2aa-f5e26679dc80-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.484441 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjcnr" event={"ID":"d26e9e37-91e8-407b-b16d-2ce68fa11f2d","Type":"ContainerStarted","Data":"ec43c4b200a30cdd0286ad26b6614475bb36aba1e37cc8950569819bb64ebf84"} Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.487607 4746 generic.go:334] "Generic (PLEG): container finished" podID="0ce3db6a-0d00-49ad-8feb-e46f901d05d2" containerID="c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25" exitCode=0 
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.487732 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.488425 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" event={"ID":"0ce3db6a-0d00-49ad-8feb-e46f901d05d2","Type":"ContainerDied","Data":"c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25"}
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.488676 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg" event={"ID":"0ce3db6a-0d00-49ad-8feb-e46f901d05d2","Type":"ContainerDied","Data":"cd0d63621034161e209e3616b047b15005e37aa4335d392ba128637a148e646f"}
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.488696 4746 scope.go:117] "RemoveContainer" containerID="c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.494737 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrwjj" event={"ID":"a2a7e5df-24f7-4400-93be-0c812a827d15","Type":"ContainerStarted","Data":"c0abd304f54e6ef216463d49188c4eb07e2f30e41cc0ead079276cd935cff4cf"}
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.499392 4746 generic.go:334] "Generic (PLEG): container finished" podID="58d919f0-90c7-4739-a2aa-f5e26679dc80" containerID="df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14" exitCode=0
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.500835 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.501346 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" event={"ID":"58d919f0-90c7-4739-a2aa-f5e26679dc80","Type":"ContainerDied","Data":"df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14"}
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.501383 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2" event={"ID":"58d919f0-90c7-4739-a2aa-f5e26679dc80","Type":"ContainerDied","Data":"a6f2eaf8b033b2768197307f06ed701048c2be8804bde780210f5067a6c98c22"}
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.527505 4746 scope.go:117] "RemoveContainer" containerID="c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25"
Jan 29 16:40:48 crc kubenswrapper[4746]: E0129 16:40:48.529667 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25\": container with ID starting with c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25 not found: ID does not exist" containerID="c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.529709 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25"} err="failed to get container status \"c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25\": rpc error: code = NotFound desc = could not find container \"c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25\": container with ID starting with c9e5249fed0db79ee1a020a32d86d6380f5ae62f4ea0f3e8b0f34a0c38aafb25 not found: ID does not exist"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.529736 4746 scope.go:117] "RemoveContainer" containerID="df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.556206 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2"]
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.569590 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f5cc7b6cc-w85c2"]
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.570504 4746 scope.go:117] "RemoveContainer" containerID="df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14"
Jan 29 16:40:48 crc kubenswrapper[4746]: E0129 16:40:48.571073 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14\": container with ID starting with df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14 not found: ID does not exist" containerID="df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.571112 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14"} err="failed to get container status \"df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14\": rpc error: code = NotFound desc = could not find container \"df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14\": container with ID starting with df0aca29798f71ca24d92f05641a39bc2284a883cc8750e1550987f543738c14 not found: ID does not exist"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.571468 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrwjj" podStartSLOduration=3.016965349 podStartE2EDuration="5.571454414s" podCreationTimestamp="2026-01-29 16:40:43 +0000 UTC" firstStartedPulling="2026-01-29 16:40:45.424958758 +0000 UTC m=+367.825543402" lastFinishedPulling="2026-01-29 16:40:47.979447823 +0000 UTC m=+370.380032467" observedRunningTime="2026-01-29 16:40:48.550622982 +0000 UTC m=+370.951207626" watchObservedRunningTime="2026-01-29 16:40:48.571454414 +0000 UTC m=+370.972039058"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.581627 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg"]
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.585026 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54f457d794-2lpvg"]
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.983405 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5888f7cc49-42wwh"]
Jan 29 16:40:48 crc kubenswrapper[4746]: E0129 16:40:48.983646 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d919f0-90c7-4739-a2aa-f5e26679dc80" containerName="controller-manager"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.983663 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d919f0-90c7-4739-a2aa-f5e26679dc80" containerName="controller-manager"
Jan 29 16:40:48 crc kubenswrapper[4746]: E0129 16:40:48.983676 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce3db6a-0d00-49ad-8feb-e46f901d05d2" containerName="route-controller-manager"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.983682 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce3db6a-0d00-49ad-8feb-e46f901d05d2" containerName="route-controller-manager"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.983801 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d919f0-90c7-4739-a2aa-f5e26679dc80" containerName="controller-manager"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.983815 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce3db6a-0d00-49ad-8feb-e46f901d05d2" containerName="route-controller-manager"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.984225 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.987826 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.988379 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.988546 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.989344 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.989802 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.992719 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:40:48 crc kubenswrapper[4746]: I0129 16:40:48.993563 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"]
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.000709 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.013825 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.036546 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.037176 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.037179 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.037658 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.037839 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.038129 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.063294 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5888f7cc49-42wwh"]
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.065211 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.065272 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.071529 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"]
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.141171 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8tjb\" (UniqueName: \"kubernetes.io/projected/44552f3a-6fde-428f-a77a-111f43de4209-kube-api-access-b8tjb\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.141615 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e97f47c-728c-4032-a49a-e9bcff250c4e-serving-cert\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.141763 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e97f47c-728c-4032-a49a-e9bcff250c4e-client-ca\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.141870 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4m4\" (UniqueName: \"kubernetes.io/projected/9e97f47c-728c-4032-a49a-e9bcff250c4e-kube-api-access-8b4m4\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.141985 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e97f47c-728c-4032-a49a-e9bcff250c4e-config\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.142082 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-config\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.142215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44552f3a-6fde-428f-a77a-111f43de4209-serving-cert\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.142331 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-proxy-ca-bundles\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.142464 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-client-ca\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.243650 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8tjb\" (UniqueName: \"kubernetes.io/projected/44552f3a-6fde-428f-a77a-111f43de4209-kube-api-access-b8tjb\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244017 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e97f47c-728c-4032-a49a-e9bcff250c4e-serving-cert\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244123 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e97f47c-728c-4032-a49a-e9bcff250c4e-client-ca\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244235 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b4m4\" (UniqueName: \"kubernetes.io/projected/9e97f47c-728c-4032-a49a-e9bcff250c4e-kube-api-access-8b4m4\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e97f47c-728c-4032-a49a-e9bcff250c4e-config\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244419 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-config\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244547 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44552f3a-6fde-428f-a77a-111f43de4209-serving-cert\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244702 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-proxy-ca-bundles\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.244819 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-client-ca\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.245822 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-client-ca\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.245851 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e97f47c-728c-4032-a49a-e9bcff250c4e-client-ca\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.246599 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-config\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.246834 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/44552f3a-6fde-428f-a77a-111f43de4209-proxy-ca-bundles\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.246846 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e97f47c-728c-4032-a49a-e9bcff250c4e-config\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.253993 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e97f47c-728c-4032-a49a-e9bcff250c4e-serving-cert\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.260797 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44552f3a-6fde-428f-a77a-111f43de4209-serving-cert\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.265447 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b4m4\" (UniqueName: \"kubernetes.io/projected/9e97f47c-728c-4032-a49a-e9bcff250c4e-kube-api-access-8b4m4\") pod \"route-controller-manager-795f8c5d69-6d9nl\" (UID: \"9e97f47c-728c-4032-a49a-e9bcff250c4e\") " pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.273905 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8tjb\" (UniqueName: \"kubernetes.io/projected/44552f3a-6fde-428f-a77a-111f43de4209-kube-api-access-b8tjb\") pod \"controller-manager-5888f7cc49-42wwh\" (UID: \"44552f3a-6fde-428f-a77a-111f43de4209\") " pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.356381 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.362297 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.510395 4746 generic.go:334] "Generic (PLEG): container finished" podID="da150909-e111-42fc-aa44-a4e181c3e57a" containerID="d32845e7d48108314c6103529cb4fd3136bf6c20f5a3dc6c7a52be1326efd873" exitCode=0
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.510818 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98p6v" event={"ID":"da150909-e111-42fc-aa44-a4e181c3e57a","Type":"ContainerDied","Data":"d32845e7d48108314c6103529cb4fd3136bf6c20f5a3dc6c7a52be1326efd873"}
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.514646 4746 generic.go:334] "Generic (PLEG): container finished" podID="d26e9e37-91e8-407b-b16d-2ce68fa11f2d" containerID="ec43c4b200a30cdd0286ad26b6614475bb36aba1e37cc8950569819bb64ebf84" exitCode=0
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.514720 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjcnr" event={"ID":"d26e9e37-91e8-407b-b16d-2ce68fa11f2d","Type":"ContainerDied","Data":"ec43c4b200a30cdd0286ad26b6614475bb36aba1e37cc8950569819bb64ebf84"}
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.626881 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5888f7cc49-42wwh"]
Jan 29 16:40:49 crc kubenswrapper[4746]: I0129 16:40:49.694557 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"]
Jan 29 16:40:49 crc kubenswrapper[4746]: W0129 16:40:49.704101 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e97f47c_728c_4032_a49a_e9bcff250c4e.slice/crio-f2e29e041de72a9eaaf13eee9e3f76bc9d7f22322354030cc26ea70739f63cb1 WatchSource:0}: Error finding container f2e29e041de72a9eaaf13eee9e3f76bc9d7f22322354030cc26ea70739f63cb1: Status 404 returned error can't find the container with id f2e29e041de72a9eaaf13eee9e3f76bc9d7f22322354030cc26ea70739f63cb1
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.455647 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce3db6a-0d00-49ad-8feb-e46f901d05d2" path="/var/lib/kubelet/pods/0ce3db6a-0d00-49ad-8feb-e46f901d05d2/volumes"
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.457078 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d919f0-90c7-4739-a2aa-f5e26679dc80" path="/var/lib/kubelet/pods/58d919f0-90c7-4739-a2aa-f5e26679dc80/volumes"
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.521473 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh" event={"ID":"44552f3a-6fde-428f-a77a-111f43de4209","Type":"ContainerStarted","Data":"1050224f69c4a7d82a13fb8723b3d541438aeb7e60521730ba3d436b98d627c2"}
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.521559 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh" event={"ID":"44552f3a-6fde-428f-a77a-111f43de4209","Type":"ContainerStarted","Data":"ba17889d4383dcdc54e979cbed363f5a9225bb04422a8214565a62a16e47e6ab"}
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.521776 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.528477 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl" event={"ID":"9e97f47c-728c-4032-a49a-e9bcff250c4e","Type":"ContainerStarted","Data":"c124c276517c878d1afe97ac7b8c96ab44b7902c6f735e66d4a759876c18eca7"}
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.528538 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl" event={"ID":"9e97f47c-728c-4032-a49a-e9bcff250c4e","Type":"ContainerStarted","Data":"f2e29e041de72a9eaaf13eee9e3f76bc9d7f22322354030cc26ea70739f63cb1"}
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.528714 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.529886 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh"
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.535178 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl"
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.547387 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5888f7cc49-42wwh" podStartSLOduration=3.54736804 podStartE2EDuration="3.54736804s" podCreationTimestamp="2026-01-29 16:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:50.544833408 +0000 UTC m=+372.945418052" watchObservedRunningTime="2026-01-29 16:40:50.54736804 +0000 UTC m=+372.947952684"
Jan 29 16:40:50 crc kubenswrapper[4746]: I0129 16:40:50.597266 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-795f8c5d69-6d9nl" podStartSLOduration=3.597239425 podStartE2EDuration="3.597239425s" podCreationTimestamp="2026-01-29 16:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:50.591893724 +0000 UTC m=+372.992478388" watchObservedRunningTime="2026-01-29 16:40:50.597239425 +0000 UTC m=+372.997824069"
Jan 29 16:40:51 crc kubenswrapper[4746]: I0129 16:40:51.536554 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98p6v" event={"ID":"da150909-e111-42fc-aa44-a4e181c3e57a","Type":"ContainerStarted","Data":"8ce6b160cfb2b85bf701182e86ca59f68c82835a285e02f70bf84796e17e5c22"}
Jan 29 16:40:51 crc kubenswrapper[4746]: I0129 16:40:51.557755 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wjcnr" event={"ID":"d26e9e37-91e8-407b-b16d-2ce68fa11f2d","Type":"ContainerStarted","Data":"cb810236449174143bfa78c2f6872270f7183ada3d52d198c879e6ddc3b5bc56"}
Jan 29 16:40:51 crc kubenswrapper[4746]: I0129 16:40:51.569037 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-98p6v" podStartSLOduration=2.217188331 podStartE2EDuration="5.569017174s" podCreationTimestamp="2026-01-29 16:40:46 +0000 UTC" firstStartedPulling="2026-01-29 16:40:47.464736686 +0000 UTC m=+369.865321340" lastFinishedPulling="2026-01-29 16:40:50.816565539 +0000 UTC m=+373.217150183" observedRunningTime="2026-01-29 16:40:51.564130705 +0000 UTC m=+373.964715359" watchObservedRunningTime="2026-01-29 16:40:51.569017174 +0000 UTC m=+373.969601828"
Jan 29 16:40:51 crc kubenswrapper[4746]: I0129 16:40:51.604039 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wjcnr" podStartSLOduration=2.701224206 podStartE2EDuration="5.604017667s" podCreationTimestamp="2026-01-29 16:40:46 +0000 UTC" firstStartedPulling="2026-01-29 16:40:47.468531043 +0000 UTC m=+369.869115697" lastFinishedPulling="2026-01-29 16:40:50.371324514 +0000 UTC m=+372.771909158" observedRunningTime="2026-01-29 16:40:51.599928751 +0000 UTC m=+374.000513405" watchObservedRunningTime="2026-01-29 16:40:51.604017667 +0000 UTC m=+374.004602311"
Jan 29 16:40:54 crc kubenswrapper[4746]: I0129 16:40:54.120940 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mfcrm"
Jan 29 16:40:54 crc kubenswrapper[4746]: I0129 16:40:54.121764 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mfcrm"
Jan 29 16:40:54 crc kubenswrapper[4746]: I0129 16:40:54.176619 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mfcrm"
Jan 29 16:40:54 crc kubenswrapper[4746]: I0129 16:40:54.297309 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrwjj"
Jan 29 16:40:54 crc kubenswrapper[4746]: I0129 16:40:54.297367 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrwjj"
Jan 29 16:40:54 crc kubenswrapper[4746]: I0129 16:40:54.616665 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mfcrm"
Jan 29 16:40:55 crc kubenswrapper[4746]: I0129 16:40:55.337229 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jrwjj" podUID="a2a7e5df-24f7-4400-93be-0c812a827d15" containerName="registry-server" probeResult="failure" output=<
Jan 29 16:40:55 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s
Jan 29 16:40:55 crc kubenswrapper[4746]: >
Jan 29 16:40:56 crc kubenswrapper[4746]: I0129 16:40:56.526296 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-98p6v"
Jan 29 16:40:56 crc kubenswrapper[4746]: I0129 16:40:56.526456 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-98p6v"
Jan 29 16:40:56 crc kubenswrapper[4746]: I0129 16:40:56.576609 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-98p6v"
Jan 29 16:40:56 crc kubenswrapper[4746]: I0129 16:40:56.636346 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-98p6v"
Jan 29 16:40:56 crc kubenswrapper[4746]: I0129 16:40:56.704714 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wjcnr"
Jan 29 16:40:56 crc kubenswrapper[4746]: I0129 16:40:56.704777 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wjcnr"
Jan 29 16:40:56 crc kubenswrapper[4746]: I0129 16:40:56.756471 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wjcnr"
Jan 29 16:40:57 crc kubenswrapper[4746]: I0129 16:40:57.645607 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wjcnr"
Jan 29 16:40:59 crc kubenswrapper[4746]: I0129 16:40:59.476911 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9p"
Jan 29 16:40:59 crc kubenswrapper[4746]: I0129 16:40:59.538481 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t9srx"]
Jan 29 16:41:04 crc kubenswrapper[4746]: I0129 16:41:04.335381 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrwjj"
Jan 29 16:41:04 crc kubenswrapper[4746]: I0129 16:41:04.389831 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrwjj"
Jan 29 16:41:19 crc kubenswrapper[4746]: I0129 16:41:19.065559 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:41:19 crc kubenswrapper[4746]: I0129 16:41:19.066305 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:41:24 crc kubenswrapper[4746]: I0129 16:41:24.577968 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" podUID="27c3b17b-1acd-412d-90eb-5782d6db606e" containerName="registry" containerID="cri-o://f44f051a3c891dca641f95b919e060c47aabc03caa0c67279c2d91cf8d6d1364" gracePeriod=30
Jan 29 16:41:24 crc kubenswrapper[4746]: I0129 16:41:24.787548 4746 generic.go:334] "Generic (PLEG): container finished" podID="27c3b17b-1acd-412d-90eb-5782d6db606e" containerID="f44f051a3c891dca641f95b919e060c47aabc03caa0c67279c2d91cf8d6d1364" exitCode=0
Jan 29 16:41:24 crc kubenswrapper[4746]: I0129 16:41:24.787653 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" event={"ID":"27c3b17b-1acd-412d-90eb-5782d6db606e","Type":"ContainerDied","Data":"f44f051a3c891dca641f95b919e060c47aabc03caa0c67279c2d91cf8d6d1364"}
Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.086870 4746 util.go:48] "No ready sandbox for
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.194973 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-trusted-ca\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.195050 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-tls\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.195097 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27c3b17b-1acd-412d-90eb-5782d6db606e-ca-trust-extracted\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.195157 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-certificates\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.195247 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwtg8\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-kube-api-access-pwtg8\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.195538 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.195612 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27c3b17b-1acd-412d-90eb-5782d6db606e-installation-pull-secrets\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.195691 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-bound-sa-token\") pod \"27c3b17b-1acd-412d-90eb-5782d6db606e\" (UID: \"27c3b17b-1acd-412d-90eb-5782d6db606e\") " Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.196292 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.196411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.203952 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27c3b17b-1acd-412d-90eb-5782d6db606e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.204418 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-kube-api-access-pwtg8" (OuterVolumeSpecName: "kube-api-access-pwtg8") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "kube-api-access-pwtg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.204652 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.205645 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.207481 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.216986 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c3b17b-1acd-412d-90eb-5782d6db606e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "27c3b17b-1acd-412d-90eb-5782d6db606e" (UID: "27c3b17b-1acd-412d-90eb-5782d6db606e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.297429 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwtg8\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-kube-api-access-pwtg8\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.297490 4746 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/27c3b17b-1acd-412d-90eb-5782d6db606e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.297509 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.297525 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.297540 4746 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.297557 4746 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/27c3b17b-1acd-412d-90eb-5782d6db606e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.297569 4746 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/27c3b17b-1acd-412d-90eb-5782d6db606e-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.796608 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" event={"ID":"27c3b17b-1acd-412d-90eb-5782d6db606e","Type":"ContainerDied","Data":"24c3b9155ea2eb095c0be8a748bf66a9d646a012065c7e037721fd88658dbe57"} Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.796721 4746 scope.go:117] "RemoveContainer" containerID="f44f051a3c891dca641f95b919e060c47aabc03caa0c67279c2d91cf8d6d1364" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.796735 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-t9srx" Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.829450 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t9srx"] Jan 29 16:41:25 crc kubenswrapper[4746]: I0129 16:41:25.833623 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-t9srx"] Jan 29 16:41:26 crc kubenswrapper[4746]: I0129 16:41:26.456293 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c3b17b-1acd-412d-90eb-5782d6db606e" path="/var/lib/kubelet/pods/27c3b17b-1acd-412d-90eb-5782d6db606e/volumes" Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.065616 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.066408 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.066501 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.067616 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"187ccb1bc8cb9fe0656edc934ee5b75e1344cc20ec1b0499ab1a774a533f9c67"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.067780 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://187ccb1bc8cb9fe0656edc934ee5b75e1344cc20ec1b0499ab1a774a533f9c67" gracePeriod=600 Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.968612 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="187ccb1bc8cb9fe0656edc934ee5b75e1344cc20ec1b0499ab1a774a533f9c67" exitCode=0 Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.968692 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"187ccb1bc8cb9fe0656edc934ee5b75e1344cc20ec1b0499ab1a774a533f9c67"} Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.969145 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"957c1c929436717125e8117e9ebc40a2b87de10bdf50d5e529bb3e048b7bfc97"} Jan 29 16:41:49 crc kubenswrapper[4746]: I0129 16:41:49.969237 4746 scope.go:117] "RemoveContainer" 
containerID="2e1a125741d75ae225be28ad9c620a8c9af00aa21b360bb243e639d1e9786b5f" Jan 29 16:43:49 crc kubenswrapper[4746]: I0129 16:43:49.065637 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:43:49 crc kubenswrapper[4746]: I0129 16:43:49.067387 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:19 crc kubenswrapper[4746]: I0129 16:44:19.065175 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:44:19 crc kubenswrapper[4746]: I0129 16:44:19.066058 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:49 crc kubenswrapper[4746]: I0129 16:44:49.065794 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:44:49 crc kubenswrapper[4746]: I0129 16:44:49.066695 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:49 crc kubenswrapper[4746]: I0129 16:44:49.066774 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:44:49 crc kubenswrapper[4746]: I0129 16:44:49.067763 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"957c1c929436717125e8117e9ebc40a2b87de10bdf50d5e529bb3e048b7bfc97"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:44:49 crc kubenswrapper[4746]: I0129 16:44:49.067845 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://957c1c929436717125e8117e9ebc40a2b87de10bdf50d5e529bb3e048b7bfc97" gracePeriod=600 Jan 29 16:44:50 crc kubenswrapper[4746]: I0129 16:44:50.161785 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" 
containerID="957c1c929436717125e8117e9ebc40a2b87de10bdf50d5e529bb3e048b7bfc97" exitCode=0 Jan 29 16:44:50 crc kubenswrapper[4746]: I0129 16:44:50.161847 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"957c1c929436717125e8117e9ebc40a2b87de10bdf50d5e529bb3e048b7bfc97"} Jan 29 16:44:50 crc kubenswrapper[4746]: I0129 16:44:50.162416 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"f56c479e12434b65f3040982c4c1ac3c63cd76a5e1a9e343b095f96d828b1ae6"} Jan 29 16:44:50 crc kubenswrapper[4746]: I0129 16:44:50.162454 4746 scope.go:117] "RemoveContainer" containerID="187ccb1bc8cb9fe0656edc934ee5b75e1344cc20ec1b0499ab1a774a533f9c67" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.168693 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp"] Jan 29 16:45:00 crc kubenswrapper[4746]: E0129 16:45:00.169693 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c3b17b-1acd-412d-90eb-5782d6db606e" containerName="registry" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.169715 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c3b17b-1acd-412d-90eb-5782d6db606e" containerName="registry" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.169841 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c3b17b-1acd-412d-90eb-5782d6db606e" containerName="registry" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.170554 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.172438 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.172799 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.179269 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp"] Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.202366 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02243086-76e8-4d02-98ce-a7cb0921996a-secret-volume\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.202505 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02243086-76e8-4d02-98ce-a7cb0921996a-config-volume\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.202548 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27n49\" (UniqueName: \"kubernetes.io/projected/02243086-76e8-4d02-98ce-a7cb0921996a-kube-api-access-27n49\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.303325 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02243086-76e8-4d02-98ce-a7cb0921996a-config-volume\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.304449 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27n49\" (UniqueName: \"kubernetes.io/projected/02243086-76e8-4d02-98ce-a7cb0921996a-kube-api-access-27n49\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.304550 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02243086-76e8-4d02-98ce-a7cb0921996a-secret-volume\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.304967 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02243086-76e8-4d02-98ce-a7cb0921996a-config-volume\") pod 
\"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.312461 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02243086-76e8-4d02-98ce-a7cb0921996a-secret-volume\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.326079 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27n49\" (UniqueName: \"kubernetes.io/projected/02243086-76e8-4d02-98ce-a7cb0921996a-kube-api-access-27n49\") pod \"collect-profiles-29495085-pxvdp\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.490914 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:00 crc kubenswrapper[4746]: I0129 16:45:00.699976 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp"] Jan 29 16:45:01 crc kubenswrapper[4746]: I0129 16:45:01.251455 4746 generic.go:334] "Generic (PLEG): container finished" podID="02243086-76e8-4d02-98ce-a7cb0921996a" containerID="68d5813546e1aff5f8bd8fd1e75c6fb63f8f6788b13eb25d81eda039fa8d65e3" exitCode=0 Jan 29 16:45:01 crc kubenswrapper[4746]: I0129 16:45:01.251595 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" event={"ID":"02243086-76e8-4d02-98ce-a7cb0921996a","Type":"ContainerDied","Data":"68d5813546e1aff5f8bd8fd1e75c6fb63f8f6788b13eb25d81eda039fa8d65e3"} Jan 29 16:45:01 crc kubenswrapper[4746]: I0129 16:45:01.251933 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" event={"ID":"02243086-76e8-4d02-98ce-a7cb0921996a","Type":"ContainerStarted","Data":"2907780ea33f3221147bda3a55a4ec205e462c38589868cc7e40e4603256697f"} Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.535223 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.733030 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02243086-76e8-4d02-98ce-a7cb0921996a-config-volume\") pod \"02243086-76e8-4d02-98ce-a7cb0921996a\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.733587 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02243086-76e8-4d02-98ce-a7cb0921996a-secret-volume\") pod \"02243086-76e8-4d02-98ce-a7cb0921996a\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.733632 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27n49\" (UniqueName: \"kubernetes.io/projected/02243086-76e8-4d02-98ce-a7cb0921996a-kube-api-access-27n49\") pod \"02243086-76e8-4d02-98ce-a7cb0921996a\" (UID: \"02243086-76e8-4d02-98ce-a7cb0921996a\") " Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.734172 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02243086-76e8-4d02-98ce-a7cb0921996a-config-volume" (OuterVolumeSpecName: "config-volume") pod "02243086-76e8-4d02-98ce-a7cb0921996a" (UID: "02243086-76e8-4d02-98ce-a7cb0921996a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.740224 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02243086-76e8-4d02-98ce-a7cb0921996a-kube-api-access-27n49" (OuterVolumeSpecName: "kube-api-access-27n49") pod "02243086-76e8-4d02-98ce-a7cb0921996a" (UID: "02243086-76e8-4d02-98ce-a7cb0921996a"). InnerVolumeSpecName "kube-api-access-27n49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.740725 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02243086-76e8-4d02-98ce-a7cb0921996a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02243086-76e8-4d02-98ce-a7cb0921996a" (UID: "02243086-76e8-4d02-98ce-a7cb0921996a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.835315 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27n49\" (UniqueName: \"kubernetes.io/projected/02243086-76e8-4d02-98ce-a7cb0921996a-kube-api-access-27n49\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.835380 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02243086-76e8-4d02-98ce-a7cb0921996a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:02 crc kubenswrapper[4746]: I0129 16:45:02.835399 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02243086-76e8-4d02-98ce-a7cb0921996a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:03 crc kubenswrapper[4746]: I0129 16:45:03.268691 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" event={"ID":"02243086-76e8-4d02-98ce-a7cb0921996a","Type":"ContainerDied","Data":"2907780ea33f3221147bda3a55a4ec205e462c38589868cc7e40e4603256697f"} Jan 29 16:45:03 crc kubenswrapper[4746]: I0129 16:45:03.268772 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2907780ea33f3221147bda3a55a4ec205e462c38589868cc7e40e4603256697f" Jan 29 16:45:03 crc kubenswrapper[4746]: I0129 16:45:03.268790 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp" Jan 29 16:46:49 crc kubenswrapper[4746]: I0129 16:46:49.065944 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:46:49 crc kubenswrapper[4746]: I0129 16:46:49.067011 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:47:15 crc kubenswrapper[4746]: I0129 16:47:15.041758 4746 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:47:19 crc kubenswrapper[4746]: I0129 16:47:19.066033 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:47:19 crc kubenswrapper[4746]: I0129 16:47:19.066493 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.430342 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdwxv"] Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 
16:47:26.431541 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-controller" containerID="cri-o://cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.432059 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="sbdb" containerID="cri-o://e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.432129 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="nbdb" containerID="cri-o://1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.432221 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="northd" containerID="cri-o://03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.432286 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.432344 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-node" containerID="cri-o://45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.432401 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-acl-logging" containerID="cri-o://515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.475667 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" containerID="cri-o://fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311" gracePeriod=30 Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.746541 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/3.log" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.750839 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovn-acl-logging/0.log" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.751506 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovn-controller/0.log" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.752679 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.822879 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zzrcf"] Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823136 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="nbdb" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823148 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="nbdb" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823157 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-node" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823164 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-node" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823222 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02243086-76e8-4d02-98ce-a7cb0921996a" containerName="collect-profiles" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823228 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="02243086-76e8-4d02-98ce-a7cb0921996a" containerName="collect-profiles" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823254 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823261 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823270 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823275 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823290 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="sbdb" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823295 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="sbdb" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823302 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823308 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823334 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-acl-logging" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823340 4746 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-acl-logging" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823349 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823355 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823363 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823368 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823377 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="northd" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823383 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="northd" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823410 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kubecfg-setup" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823417 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kubecfg-setup" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823425 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823431 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823529 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-acl-logging" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823539 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="northd" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823558 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="02243086-76e8-4d02-98ce-a7cb0921996a" containerName="collect-profiles" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823568 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovn-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823573 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-node" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823579 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823585 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="sbdb" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 
16:47:26.823592 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823600 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="nbdb" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823606 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823612 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823619 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823626 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: E0129 16:47:26.823705 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.823712 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerName="ovnkube-controller" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.825218 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.874211 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-systemd\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.874297 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-slash\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.874366 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-slash" (OuterVolumeSpecName: "host-slash") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.874410 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-etc-openvswitch\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.874465 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.874799 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50599064-6fa5-43ed-9c1d-a58b3180d421-ovn-node-metrics-cert\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.874920 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-ovn-kubernetes\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875038 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875231 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-log-socket" (OuterVolumeSpecName: "log-socket") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875339 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-log-socket\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875455 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-netns\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875587 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-kubelet\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875683 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-bin\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875938 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-script-lib\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876071 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-netd\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876157 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-env-overrides\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876292 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-ovn\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875500 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875625 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876391 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.875921 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876116 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876426 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876529 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.876706 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877251 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-openvswitch\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877346 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-node-log\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877443 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-var-lib-openvswitch\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877546 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht6sv\" (UniqueName: \"kubernetes.io/projected/50599064-6fa5-43ed-9c1d-a58b3180d421-kube-api-access-ht6sv\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877644 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-var-lib-cni-networks-ovn-kubernetes\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877745 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-systemd-units\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877861 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-config\") pod \"50599064-6fa5-43ed-9c1d-a58b3180d421\" (UID: \"50599064-6fa5-43ed-9c1d-a58b3180d421\") " Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877476 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877489 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-node-log" (OuterVolumeSpecName: "node-log") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877717 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.877811 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.878371 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.878570 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovnkube-script-lib\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.878687 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.878799 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-log-socket\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.878932 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-systemd\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879063 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovn-node-metrics-cert\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879156 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-kubelet\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879304 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-node-log\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879398 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-slash\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879543 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-cni-bin\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879603 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-var-lib-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-etc-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879661 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-env-overrides\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879686 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovnkube-config\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-run-netns\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 
16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879759 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-kube-api-access-pm925\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879792 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-ovn\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879828 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879857 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-cni-netd\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.879903 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880036 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-systemd-units\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880221 4746 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880251 4746 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880267 4746 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880292 4746 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-log-socket\") on node \"crc\" DevicePath \"\"" 
Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880304 4746 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880316 4746 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880326 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880336 4746 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880344 4746 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880351 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880359 4746 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880367 4746 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880377 4746 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880386 4746 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880395 4746 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880404 4746 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.880415 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50599064-6fa5-43ed-9c1d-a58b3180d421-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc 
kubenswrapper[4746]: I0129 16:47:26.881063 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50599064-6fa5-43ed-9c1d-a58b3180d421-kube-api-access-ht6sv" (OuterVolumeSpecName: "kube-api-access-ht6sv") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "kube-api-access-ht6sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.881225 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50599064-6fa5-43ed-9c1d-a58b3180d421-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.887405 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "50599064-6fa5-43ed-9c1d-a58b3180d421" (UID: "50599064-6fa5-43ed-9c1d-a58b3180d421"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981611 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovnkube-script-lib\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981689 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981730 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-log-socket\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981783 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-systemd\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981836 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981927 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-log-socket\") pod \"ovnkube-node-zzrcf\" (UID: 
\"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981935 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-systemd\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.981847 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovn-node-metrics-cert\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982119 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-kubelet\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982258 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-node-log\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982276 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-kubelet\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982305 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-slash\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982345 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-cni-bin\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982351 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-node-log\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982384 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-var-lib-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 
16:47:26.982415 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-slash\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-etc-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982444 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-var-lib-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982448 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-etc-openvswitch\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982422 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-cni-bin\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982454 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-env-overrides\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982514 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovnkube-config\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982539 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-run-netns\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982573 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-kube-api-access-pm925\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982589 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-run-netns\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982617 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-ovn\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982597 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-run-ovn\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982658 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982679 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-cni-netd\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982717 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982763 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-systemd-units\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982860 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50599064-6fa5-43ed-9c1d-a58b3180d421-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982852 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-cni-netd\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982890 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-systemd-units\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982874 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht6sv\" (UniqueName: \"kubernetes.io/projected/50599064-6fa5-43ed-9c1d-a58b3180d421-kube-api-access-ht6sv\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982916 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.982944 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.983259 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovnkube-config\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.983376 4746 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50599064-6fa5-43ed-9c1d-a58b3180d421-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.983524 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-env-overrides\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.984080 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovnkube-script-lib\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:26 crc kubenswrapper[4746]: I0129 16:47:26.986685 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-ovn-node-metrics-cert\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.011424 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm925\" (UniqueName: \"kubernetes.io/projected/c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5-kube-api-access-pm925\") pod \"ovnkube-node-zzrcf\" (UID: \"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.142461 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.176167 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovnkube-controller/3.log" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.179305 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovn-acl-logging/0.log" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.179895 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdwxv_50599064-6fa5-43ed-9c1d-a58b3180d421/ovn-controller/0.log" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182131 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311" exitCode=0 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182348 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257" exitCode=0 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182477 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86" exitCode=0 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182594 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4" exitCode=0 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182769 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f" exitCode=0 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182928 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8" exitCode=0 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183046 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be" exitCode=143 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183169 4746 generic.go:334] "Generic (PLEG): container finished" podID="50599064-6fa5-43ed-9c1d-a58b3180d421" containerID="cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9" exitCode=143 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182236 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.182245 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183451 4746 scope.go:117] "RemoveContainer" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183432 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183612 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183653 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183675 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183698 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183866 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183885 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183898 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183910 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183922 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183938 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183954 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183970 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.183986 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184009 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184038 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184058 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184075 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184092 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184110 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184127 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184143 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184159 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184174 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184240 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184320 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184383 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184400 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184413 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184424 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184436 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184448 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184459 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184557 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184648 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184668 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184742 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdwxv" event={"ID":"50599064-6fa5-43ed-9c1d-a58b3180d421","Type":"ContainerDied","Data":"d6f2202770ecc2bfb85b955b220c7b4a0e3109b0524801396ec52bb96a1c1141"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184777 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184867 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 
16:47:27.184887 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184902 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184965 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184983 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.184997 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.185061 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.185083 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.185099 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.192956 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/2.log" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.194113 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/1.log" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.194154 4746 generic.go:334] "Generic (PLEG): container finished" podID="017d8376-e00b-442b-ac6b-b2189ff75132" containerID="d5ef49e5ef0c78740093a11d20b861a3b623803368308cfc198a4d068e879da9" exitCode=2 Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.194197 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerDied","Data":"d5ef49e5ef0c78740093a11d20b861a3b623803368308cfc198a4d068e879da9"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.194220 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9031662dc0755d9384e39ba9022dc7c024bb83d7703d06346db655574211fc10"} Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.194768 4746 scope.go:117] "RemoveContainer" containerID="d5ef49e5ef0c78740093a11d20b861a3b623803368308cfc198a4d068e879da9" Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.225955 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7" Jan 29 16:47:27 
crc kubenswrapper[4746]: I0129 16:47:27.249746 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdwxv"]
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.249847 4746 scope.go:117] "RemoveContainer" containerID="e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.254992 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdwxv"]
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.284691 4746 scope.go:117] "RemoveContainer" containerID="1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.304169 4746 scope.go:117] "RemoveContainer" containerID="03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.320625 4746 scope.go:117] "RemoveContainer" containerID="c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.338091 4746 scope.go:117] "RemoveContainer" containerID="45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.393447 4746 scope.go:117] "RemoveContainer" containerID="515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.409699 4746 scope.go:117] "RemoveContainer" containerID="cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.430810 4746 scope.go:117] "RemoveContainer" containerID="19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.459368 4746 scope.go:117] "RemoveContainer" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.463492 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": container with ID starting with fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311 not found: ID does not exist" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.463550 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} err="failed to get container status \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": rpc error: code = NotFound desc = could not find container \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": container with ID starting with fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.463589 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.464284 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": container with ID starting with 8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7 not found: ID does not exist" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.464319 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} err="failed to get container status \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": rpc error: code = NotFound desc = could not find container \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": container with ID starting with 8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.464339 4746 scope.go:117] "RemoveContainer" containerID="e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.464854 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": container with ID starting with e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257 not found: ID does not exist" containerID="e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.464889 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} err="failed to get container status \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": rpc error: code = NotFound desc = could not find container \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": container with ID starting with e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.464916 4746 scope.go:117] "RemoveContainer" containerID="1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.465627 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": container with ID starting with 1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86 not found: ID does not exist" containerID="1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.465656 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} err="failed to get container status \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": rpc error: code = NotFound desc = could not find container \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": container with ID starting with 1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.465672 4746 scope.go:117] "RemoveContainer" containerID="03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.466062 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": container with ID starting with 03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4 not found: ID does not exist" containerID="03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.466095 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} err="failed to get container status \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": rpc error: code = NotFound desc = could not find container \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": container with ID starting with 03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.466123 4746 scope.go:117] "RemoveContainer" containerID="c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.466800 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": container with ID starting with c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f not found: ID does not exist" containerID="c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.466830 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} err="failed to get container status \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": rpc error: code = NotFound desc = could not find container \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": container with ID starting with c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.466858 4746 scope.go:117] "RemoveContainer" containerID="45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.467440 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": container with ID starting with 45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8 not found: ID does not exist" containerID="45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.467472 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} err="failed to get container status \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": rpc error: code = NotFound desc = could not find container \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": container with ID starting with 45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.467529 4746 scope.go:117] "RemoveContainer" containerID="515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.469129 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": container with ID starting with 515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be not found: ID does not exist" containerID="515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.469158 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} err="failed to get container status \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": rpc error: code = NotFound desc = could not find container \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": container with ID starting with 515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.469171 4746 scope.go:117] "RemoveContainer" containerID="cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.469576 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": container with ID starting with cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9 not found: ID does not exist" containerID="cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.469599 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} err="failed to get container status \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": rpc error: code = NotFound desc = could not find container \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": container with ID starting with cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.469614 4746 scope.go:117] "RemoveContainer" containerID="19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"
Jan 29 16:47:27 crc kubenswrapper[4746]: E0129 16:47:27.470265 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": container with ID starting with 19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d not found: ID does not exist" containerID="19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.470323 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} err="failed to get container status \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": rpc error: code = NotFound desc = could not find container \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": container with ID starting with 19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.470360 4746 scope.go:117] "RemoveContainer" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.470880 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} err="failed to get container status \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": rpc error: code = NotFound desc = could not find container \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": container with ID starting with fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.470919 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.471336 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} err="failed to get container status \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": rpc error: code = NotFound desc = could not find container \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": container with ID starting with 8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.471369 4746 scope.go:117] "RemoveContainer" containerID="e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.472453 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} err="failed to get container status \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": rpc error: code = NotFound desc = could not find container \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": container with ID starting with e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.472487 4746 scope.go:117] "RemoveContainer" containerID="1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.473111 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} err="failed to get container status \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": rpc error: code = NotFound desc = could not find container \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": container with ID starting with 1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.473139 4746 scope.go:117] "RemoveContainer" containerID="03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.473571 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} err="failed to get container status \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": rpc error: code = NotFound desc = could not find container \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": container with ID starting with 03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.473593 4746 scope.go:117] "RemoveContainer" containerID="c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.474038 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} err="failed to get container status \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": rpc error: code = NotFound desc = could not find container \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": container with ID starting with c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.474074 4746 scope.go:117] "RemoveContainer" containerID="45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.474451 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} err="failed to get container status \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": rpc error: code = NotFound desc = could not find container \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": container with ID starting with 45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.474477 4746 scope.go:117] "RemoveContainer" containerID="515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.474866 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} err="failed to get container status \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": rpc error: code = NotFound desc = could not find container \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": container with ID starting with 515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.474901 4746 scope.go:117] "RemoveContainer" containerID="cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.475311 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} err="failed to get container status \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": rpc error: code = NotFound desc = could not find container \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": container with ID starting with cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.475346 4746 scope.go:117] "RemoveContainer" containerID="19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.475668 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} err="failed to get container status \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": rpc error: code = NotFound desc = could not find container \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": container with ID starting with 19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.475700 4746 scope.go:117] "RemoveContainer" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.476025 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} err="failed to get container status \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": rpc error: code = NotFound desc = could not find container \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": container with ID starting with fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.476073 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.476485 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} err="failed to get container status \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": rpc error: code = NotFound desc = could not find container \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": container with ID starting with 8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.476519 4746 scope.go:117] "RemoveContainer" containerID="e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.476821 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} err="failed to get container status \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": rpc error: code = NotFound desc = could not find container \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": container with ID starting with e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.476856 4746 scope.go:117] "RemoveContainer" containerID="1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.477222 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} err="failed to get container status \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": rpc error: code = NotFound desc = could not find container \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": container with ID starting with 1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.477245 4746 scope.go:117] "RemoveContainer" containerID="03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.477572 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} err="failed to get container status \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": rpc error: code = NotFound desc = could not find container \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": container with ID starting with 03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.477607 4746 scope.go:117] "RemoveContainer" containerID="c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.478085 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} err="failed to get container status \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": rpc error: code = NotFound desc = could not find container \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": container with ID starting with c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.478107 4746 scope.go:117] "RemoveContainer" containerID="45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.478538 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} err="failed to get container status \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": rpc error: code = NotFound desc = could not find container \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": container with ID starting with 45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.478555 4746 scope.go:117] "RemoveContainer" containerID="515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.478833 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} err="failed to get container status \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": rpc error: code = NotFound desc = could not find container \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": container with ID starting with 515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.478871 4746 scope.go:117] "RemoveContainer" containerID="cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.479321 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} err="failed to get container status \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": rpc error: code = NotFound desc = could not find container \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": container with ID starting with cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.479352 4746 scope.go:117] "RemoveContainer" containerID="19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.479667 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} err="failed to get container status \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": rpc error: code = NotFound desc = could not find container \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": container with ID starting with 19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.479690 4746 scope.go:117] "RemoveContainer" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.480104 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} err="failed to get container status \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": rpc error: code = NotFound desc = could not find container \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": container with ID starting with fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.480136 4746 scope.go:117] "RemoveContainer" containerID="8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.480524 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7"} err="failed to get container status \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": rpc error: code = NotFound desc = could not find container \"8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7\": container with ID starting with 8e10395a1f5371ed5d5e4038d5df90a5066902b0355cc62a16489616073a94f7 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.480557 4746 scope.go:117] "RemoveContainer" containerID="e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.480875 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257"} err="failed to get container status \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": rpc error: code = NotFound desc = could not find container \"e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257\": container with ID starting with e29c5316f5fcc795c7d854b93e9e96238f59d56eb4025fafe15d44933517f257 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.480915 4746 scope.go:117] "RemoveContainer" containerID="1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.481403 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86"} err="failed to get container status \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": rpc error: code = NotFound desc = could not find container \"1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86\": container with ID starting with 1a7a524a9f49188f94486c9fb9eaafe82ef615772412ed679e4608fd25a1dd86 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.481435 4746 scope.go:117] "RemoveContainer" containerID="03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.481736 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4"} err="failed to get container status \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": rpc error: code = NotFound desc = could not find container \"03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4\": container with ID starting with 03a655ae77ff85039fb6923f6ed6d01a81e32ba3a5f7d9d9dc8aeaaafb26d2c4 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.481756 4746 scope.go:117] "RemoveContainer" containerID="c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.482847 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f"} err="failed to get container status \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": rpc error: code = NotFound desc = could not find container \"c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f\": container with ID starting with c534f7fc794ffa9385b4bdbe0d8ba49d8a04c7b66a407080db2a92771380813f not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.482886 4746 scope.go:117] "RemoveContainer" containerID="45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.483442 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8"} err="failed to get container status \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": rpc error: code = NotFound desc = could not find container \"45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8\": container with ID starting with 45814fc8912c8351e87979a1a39b3ee987018935cf1b31ad58ddf452719cdcd8 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.483470 4746 scope.go:117] "RemoveContainer" containerID="515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.484133 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be"} err="failed to get container status \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": rpc error: code = NotFound desc = could not find container \"515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be\": container with ID starting with 515659ff9b83eee2b7921e9ef594bf1f53fd27a50b1ba4695340a0e4f25615be not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.484161 4746 scope.go:117] "RemoveContainer" containerID="cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.485334 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9"} err="failed to get container status \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": rpc error: code = NotFound desc = could not find container \"cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9\": container with ID starting with cce3cd840a0e8f747b33676b9e571351b8a42517002f0b2fc8e9f61c2eb7fdf9 not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.485453 4746 scope.go:117] "RemoveContainer" containerID="19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.486623 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d"} err="failed to get container status \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": rpc error: code = NotFound desc = could not find container \"19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d\": container with ID starting with 19d3dbc79ab53da57f71e3a9b32eb7873e732bc0b1b4dfbae3afdedf92bfbc1d not found: ID does not exist"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.486694 4746 scope.go:117] "RemoveContainer" containerID="fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"
Jan 29 16:47:27 crc kubenswrapper[4746]: I0129 16:47:27.487662 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311"} err="failed to get container status \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": rpc error: code = NotFound desc = could not find container \"fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311\": container with ID starting with fe19b07e1cbffe72d7092fd5eb15fef83cb86e60a021e67171dbf558f0beb311 not found: ID does not exist"
Jan 29 16:47:28 crc kubenswrapper[4746]: I0129 16:47:28.205827 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/2.log"
Jan 29 16:47:28 crc kubenswrapper[4746]: I0129 16:47:28.207127 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-74h7n_017d8376-e00b-442b-ac6b-b2189ff75132/kube-multus/1.log"
Jan 29 16:47:28 crc kubenswrapper[4746]: I0129 16:47:28.207351 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-74h7n" event={"ID":"017d8376-e00b-442b-ac6b-b2189ff75132","Type":"ContainerStarted","Data":"9f6255bb2838bb528d400e38979b104ad8a3290d40cb93006040c17056bd672b"}
Jan 29 16:47:28 crc kubenswrapper[4746]: I0129 16:47:28.210575 4746 generic.go:334] "Generic (PLEG): container finished" podID="c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5" containerID="0908b0749316520fb1cc9605414192a5c29bad888bb981ba6e6bf1cdacfbca96" exitCode=0
Jan 29 16:47:28 crc kubenswrapper[4746]: I0129 16:47:28.210666 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerDied","Data":"0908b0749316520fb1cc9605414192a5c29bad888bb981ba6e6bf1cdacfbca96"}
Jan 29 16:47:28 crc kubenswrapper[4746]: I0129 16:47:28.210723 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"f4c7472fd44f8c6c6fed7ba2ac813fd9dbf48056da7911397d5ca33383f09409"}
Jan 29 16:47:28 crc kubenswrapper[4746]: I0129 16:47:28.454781 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50599064-6fa5-43ed-9c1d-a58b3180d421" path="/var/lib/kubelet/pods/50599064-6fa5-43ed-9c1d-a58b3180d421/volumes"
Jan 29 16:47:29 crc kubenswrapper[4746]: I0129 16:47:29.226738 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"2398a95087822f84f6034fc66da9c0a211bc5118fd680e3e5b6d8a03532b4f89"}
Jan 29 16:47:29 crc kubenswrapper[4746]: I0129 16:47:29.227426 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"4f8647919ba21bc5ea8dc7a8e1dd932262924c8169ec7d61b11936605a186c59"}
Jan 29 16:47:29 crc kubenswrapper[4746]: I0129 16:47:29.227448 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"f0b68533b2af6e07631c12895fa2c5c847309ed6474dd731212086a9252f67db"}
Jan 29 16:47:29 crc kubenswrapper[4746]: I0129 16:47:29.227468 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"2867e815cc86f69f603cfb6705152b7e8423758cc9599d0f2b121c678beb0345"}
Jan 29 16:47:29 crc kubenswrapper[4746]: I0129 16:47:29.227485 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"47e5f229fdb46b213febbc3dbb7695a12eeb997b41522afee6bd264b0bd5ac02"}
Jan 29 16:47:29 crc kubenswrapper[4746]: I0129 16:47:29.227500 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"d0c9f8afc1b14e92772c4428f94253597d08f2f37b6a764d1b6e9649523f78e3"}
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.062271 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2dmb7"]
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.063903 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.066560 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.068615 4746 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-7hthz"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.069268 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.069445 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.156409 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrhp\" (UniqueName: \"kubernetes.io/projected/6c1f4d6c-4612-4843-b70e-dd016136b6dd-kube-api-access-wwrhp\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.156518 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6c1f4d6c-4612-4843-b70e-dd016136b6dd-crc-storage\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.156561 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6c1f4d6c-4612-4843-b70e-dd016136b6dd-node-mnt\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.254005 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"4d50749bb739be9b9e705f37cdfc97ba037257ec3f036b9987f152b63eae4a8b"}
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.257789 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrhp\" (UniqueName: \"kubernetes.io/projected/6c1f4d6c-4612-4843-b70e-dd016136b6dd-kube-api-access-wwrhp\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.257839 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6c1f4d6c-4612-4843-b70e-dd016136b6dd-crc-storage\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.257877 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6c1f4d6c-4612-4843-b70e-dd016136b6dd-node-mnt\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.258280 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6c1f4d6c-4612-4843-b70e-dd016136b6dd-node-mnt\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.259717 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6c1f4d6c-4612-4843-b70e-dd016136b6dd-crc-storage\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.281166 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrhp\" (UniqueName: \"kubernetes.io/projected/6c1f4d6c-4612-4843-b70e-dd016136b6dd-kube-api-access-wwrhp\") pod \"crc-storage-crc-2dmb7\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") " pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: I0129 16:47:32.379803 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: E0129 16:47:32.414812 4746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(d1479b52acff04eb1b848ae64bafc40326bd6e609da71955cecaa1ef25f7fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:47:32 crc kubenswrapper[4746]: E0129 16:47:32.414894 4746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(d1479b52acff04eb1b848ae64bafc40326bd6e609da71955cecaa1ef25f7fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: E0129 16:47:32.414925 4746 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(d1479b52acff04eb1b848ae64bafc40326bd6e609da71955cecaa1ef25f7fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:32 crc kubenswrapper[4746]: E0129 16:47:32.414974 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2dmb7_crc-storage(6c1f4d6c-4612-4843-b70e-dd016136b6dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2dmb7_crc-storage(6c1f4d6c-4612-4843-b70e-dd016136b6dd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(d1479b52acff04eb1b848ae64bafc40326bd6e609da71955cecaa1ef25f7fb6f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2dmb7" podUID="6c1f4d6c-4612-4843-b70e-dd016136b6dd"
Jan 29 16:47:34 crc kubenswrapper[4746]: I0129 16:47:34.273733 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" event={"ID":"c970d30f-c5e0-42a8-b7d7-e2c3bf7618e5","Type":"ContainerStarted","Data":"296b76957f65d78d0394c08966ca47e1e8434ae91553b1dfb009be7e35c0c973"}
Jan 29 16:47:34 crc kubenswrapper[4746]: I0129 16:47:34.274050 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf"
Jan 29 16:47:34 crc kubenswrapper[4746]: I0129 16:47:34.274064 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf"
Jan 29 16:47:34 crc kubenswrapper[4746]: I0129 16:47:34.274216 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf"
Jan 29 16:47:34 crc kubenswrapper[4746]: I0129 16:47:34.313945 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf"
Jan 29 16:47:34 crc kubenswrapper[4746]: I0129 16:47:34.318282 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf" podStartSLOduration=8.318249971 podStartE2EDuration="8.318249971s" podCreationTimestamp="2026-01-29 16:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:47:34.315815633 +0000 UTC m=+776.716400327" watchObservedRunningTime="2026-01-29 16:47:34.318249971 +0000 UTC m=+776.718834625"
Jan 29 16:47:34 crc kubenswrapper[4746]: I0129 16:47:34.320813 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zzrcf"
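The "Observed pod startup duration" record above is straight arithmetic over the timestamps it carries: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the zeroed pull timestamps mean no image pull contributed. A sketch reproducing the subtraction with Go's time package, reusing the values from the log line (the layout string is an assumption about the printed format):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-29 16:47:26 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-29 16:47:34.318249971 +0000 UTC")
	// Matches podStartSLOduration=8.318249971 in the log line above.
	fmt.Println(observed.Sub(created).Seconds())
}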
Jan 29 16:47:35 crc kubenswrapper[4746]: I0129 16:47:35.883285 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2dmb7"]
Jan 29 16:47:35 crc kubenswrapper[4746]: I0129 16:47:35.883587 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:35 crc kubenswrapper[4746]: I0129 16:47:35.884548 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:35 crc kubenswrapper[4746]: E0129 16:47:35.919130 4746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(6b5d58ccc5ba482de81728b04797c18f42cbe33c021c0a8c653b746abec7d90b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:47:35 crc kubenswrapper[4746]: E0129 16:47:35.919255 4746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(6b5d58ccc5ba482de81728b04797c18f42cbe33c021c0a8c653b746abec7d90b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:35 crc kubenswrapper[4746]: E0129 16:47:35.919291 4746 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(6b5d58ccc5ba482de81728b04797c18f42cbe33c021c0a8c653b746abec7d90b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:35 crc kubenswrapper[4746]: E0129 16:47:35.919375 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2dmb7_crc-storage(6c1f4d6c-4612-4843-b70e-dd016136b6dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2dmb7_crc-storage(6c1f4d6c-4612-4843-b70e-dd016136b6dd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2dmb7_crc-storage_6c1f4d6c-4612-4843-b70e-dd016136b6dd_0(6b5d58ccc5ba482de81728b04797c18f42cbe33c021c0a8c653b746abec7d90b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2dmb7" podUID="6c1f4d6c-4612-4843-b70e-dd016136b6dd"
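Both CreatePodSandbox failures above are one and the same condition: the runtime found no CNI configuration under /etc/kubernetes/cni/net.d/ because ovnkube-node was still starting, and the pod is simply retried until the network provider writes its config. A minimal sketch of the kind of directory check involved, assuming the conventional .conf/.conflist/.json extensions (illustrative, not the runtime's actual implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI config file.
// Illustrative only; a real runtime also parses and validates the contents.
func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasCNIConfig("/etc/kubernetes/cni/net.d"))
}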
"Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="f56c479e12434b65f3040982c4c1ac3c63cd76a5e1a9e343b095f96d828b1ae6" exitCode=0 Jan 29 16:47:49 crc kubenswrapper[4746]: I0129 16:47:49.373574 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"f56c479e12434b65f3040982c4c1ac3c63cd76a5e1a9e343b095f96d828b1ae6"} Jan 29 16:47:49 crc kubenswrapper[4746]: I0129 16:47:49.373866 4746 scope.go:117] "RemoveContainer" containerID="957c1c929436717125e8117e9ebc40a2b87de10bdf50d5e529bb3e048b7bfc97" Jan 29 16:47:49 crc kubenswrapper[4746]: I0129 16:47:49.445031 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2dmb7" Jan 29 16:47:49 crc kubenswrapper[4746]: I0129 16:47:49.445920 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2dmb7" Jan 29 16:47:49 crc kubenswrapper[4746]: I0129 16:47:49.637416 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2dmb7"] Jan 29 16:47:49 crc kubenswrapper[4746]: W0129 16:47:49.651915 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c1f4d6c_4612_4843_b70e_dd016136b6dd.slice/crio-3cad56289f0e5e832432724e6b6edc210946bdeb17f4624ae31738c888b7c383 WatchSource:0}: Error finding container 3cad56289f0e5e832432724e6b6edc210946bdeb17f4624ae31738c888b7c383: Status 404 returned error can't find the container with id 3cad56289f0e5e832432724e6b6edc210946bdeb17f4624ae31738c888b7c383 Jan 29 16:47:49 crc kubenswrapper[4746]: I0129 16:47:49.655034 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:47:50 crc kubenswrapper[4746]: I0129 16:47:50.384739 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"3638d7699d354888da89723ea0a7801e67c37af27cf4d7fc2d221d9637b01dae"} Jan 29 16:47:50 crc kubenswrapper[4746]: I0129 16:47:50.386844 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2dmb7" event={"ID":"6c1f4d6c-4612-4843-b70e-dd016136b6dd","Type":"ContainerStarted","Data":"3cad56289f0e5e832432724e6b6edc210946bdeb17f4624ae31738c888b7c383"} Jan 29 16:47:51 crc kubenswrapper[4746]: I0129 16:47:51.395701 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c1f4d6c-4612-4843-b70e-dd016136b6dd" containerID="e21711954509fe2c9ca106d711b43d62ecb5be2f7f6e9ec29da99fcafcd6b84f" exitCode=0 Jan 29 16:47:51 crc kubenswrapper[4746]: I0129 16:47:51.395798 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2dmb7" event={"ID":"6c1f4d6c-4612-4843-b70e-dd016136b6dd","Type":"ContainerDied","Data":"e21711954509fe2c9ca106d711b43d62ecb5be2f7f6e9ec29da99fcafcd6b84f"} Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.629616 4746 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:47:50 crc kubenswrapper[4746]: I0129 16:47:50.386844 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2dmb7" event={"ID":"6c1f4d6c-4612-4843-b70e-dd016136b6dd","Type":"ContainerStarted","Data":"3cad56289f0e5e832432724e6b6edc210946bdeb17f4624ae31738c888b7c383"}
Jan 29 16:47:51 crc kubenswrapper[4746]: I0129 16:47:51.395701 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c1f4d6c-4612-4843-b70e-dd016136b6dd" containerID="e21711954509fe2c9ca106d711b43d62ecb5be2f7f6e9ec29da99fcafcd6b84f" exitCode=0
Jan 29 16:47:51 crc kubenswrapper[4746]: I0129 16:47:51.395798 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2dmb7" event={"ID":"6c1f4d6c-4612-4843-b70e-dd016136b6dd","Type":"ContainerDied","Data":"e21711954509fe2c9ca106d711b43d62ecb5be2f7f6e9ec29da99fcafcd6b84f"}
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.629616 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2dmb7"
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.738456 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6c1f4d6c-4612-4843-b70e-dd016136b6dd-node-mnt\") pod \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") "
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.738551 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6c1f4d6c-4612-4843-b70e-dd016136b6dd-crc-storage\") pod \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") "
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.738551 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c1f4d6c-4612-4843-b70e-dd016136b6dd-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "6c1f4d6c-4612-4843-b70e-dd016136b6dd" (UID: "6c1f4d6c-4612-4843-b70e-dd016136b6dd"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.738617 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrhp\" (UniqueName: \"kubernetes.io/projected/6c1f4d6c-4612-4843-b70e-dd016136b6dd-kube-api-access-wwrhp\") pod \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\" (UID: \"6c1f4d6c-4612-4843-b70e-dd016136b6dd\") "
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.738800 4746 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/6c1f4d6c-4612-4843-b70e-dd016136b6dd-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.743608 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1f4d6c-4612-4843-b70e-dd016136b6dd-kube-api-access-wwrhp" (OuterVolumeSpecName: "kube-api-access-wwrhp") pod "6c1f4d6c-4612-4843-b70e-dd016136b6dd" (UID: "6c1f4d6c-4612-4843-b70e-dd016136b6dd"). InnerVolumeSpecName "kube-api-access-wwrhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.751908 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1f4d6c-4612-4843-b70e-dd016136b6dd-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "6c1f4d6c-4612-4843-b70e-dd016136b6dd" (UID: "6c1f4d6c-4612-4843-b70e-dd016136b6dd"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.839978 4746 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/6c1f4d6c-4612-4843-b70e-dd016136b6dd-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 29 16:47:52 crc kubenswrapper[4746]: I0129 16:47:52.840023 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwrhp\" (UniqueName: \"kubernetes.io/projected/6c1f4d6c-4612-4843-b70e-dd016136b6dd-kube-api-access-wwrhp\") on node \"crc\" DevicePath \"\""
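The unmount sequence above is the mount sequence from 16:47:32 run in reverse: the volume manager continuously diffs desired state (volumes required by admitted pods) against actual state (volumes currently mounted), so once the pod is gone the desired set empties and every mounted volume is torn down. A schematic sketch of that reconcile pattern under simplified types (not kubelet's real volumemanager API):

package main

import "fmt"

// reconcile diffs desired against actual state, in the spirit of kubelet's
// volume manager; types and logic are heavily simplified for illustration.
func reconcile(desired, actual map[string]bool) {
	for vol := range desired {
		if !actual[vol] {
			fmt.Println("MountVolume started for volume", vol)
			actual[vol] = true
		}
	}
	for vol := range actual {
		if !desired[vol] {
			fmt.Println("UnmountVolume started for volume", vol)
			delete(actual, vol)
		}
	}
}

func main() {
	actual := map[string]bool{"node-mnt": true, "crc-storage": true, "kube-api-access-wwrhp": true}
	reconcile(map[string]bool{}, actual) // pod deleted: everything unmounts
}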
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.127388 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.135279 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.135379 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8splt\" (UniqueName: \"kubernetes.io/projected/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-kube-api-access-8splt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.135413 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.137482 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw"] Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.237171 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.237319 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8splt\" (UniqueName: \"kubernetes.io/projected/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-kube-api-access-8splt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.237379 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.237719 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.238041 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.262110 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8splt\" (UniqueName: \"kubernetes.io/projected/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-kube-api-access-8splt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.442924 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:00 crc kubenswrapper[4746]: I0129 16:48:00.692338 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw"] Jan 29 16:48:00 crc kubenswrapper[4746]: W0129 16:48:00.698944 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bc178bb_5ffb_4d68_b022_6b2025b2bfcb.slice/crio-6446c6def8f80863d11bf6f23724323c8df6c308836ba45cf437e66095e1e805 WatchSource:0}: Error finding container 6446c6def8f80863d11bf6f23724323c8df6c308836ba45cf437e66095e1e805: Status 404 returned error can't find the container with id 6446c6def8f80863d11bf6f23724323c8df6c308836ba45cf437e66095e1e805 Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.470270 4746 generic.go:334] "Generic (PLEG): container finished" podID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerID="bee82873e140beb31e8fefef03b9aa8235cfc3a5ae92e26a995dc0e93fd757b8" exitCode=0 Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.470363 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" event={"ID":"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb","Type":"ContainerDied","Data":"bee82873e140beb31e8fefef03b9aa8235cfc3a5ae92e26a995dc0e93fd757b8"} Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.470631 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" event={"ID":"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb","Type":"ContainerStarted","Data":"6446c6def8f80863d11bf6f23724323c8df6c308836ba45cf437e66095e1e805"} Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.931444 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmqz9"] Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.933854 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.948245 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmqz9"] Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.957932 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-utilities\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.958106 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lnn\" (UniqueName: \"kubernetes.io/projected/38fbb600-5c6f-46d3-bc27-bee771873244-kube-api-access-r6lnn\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:01 crc kubenswrapper[4746]: I0129 16:48:01.958344 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-catalog-content\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.060051 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-catalog-content\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.060352 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-utilities\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.060445 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6lnn\" (UniqueName: \"kubernetes.io/projected/38fbb600-5c6f-46d3-bc27-bee771873244-kube-api-access-r6lnn\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.061163 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-catalog-content\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.061244 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-utilities\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.082790 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r6lnn\" (UniqueName: \"kubernetes.io/projected/38fbb600-5c6f-46d3-bc27-bee771873244-kube-api-access-r6lnn\") pod \"redhat-operators-vmqz9\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") " pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.263306 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:02 crc kubenswrapper[4746]: I0129 16:48:02.493036 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmqz9"] Jan 29 16:48:02 crc kubenswrapper[4746]: W0129 16:48:02.503690 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38fbb600_5c6f_46d3_bc27_bee771873244.slice/crio-d20ada1a11d068e610b4a2e2f420de3dbc3e9dbba4976024e57947ed7eb8d271 WatchSource:0}: Error finding container d20ada1a11d068e610b4a2e2f420de3dbc3e9dbba4976024e57947ed7eb8d271: Status 404 returned error can't find the container with id d20ada1a11d068e610b4a2e2f420de3dbc3e9dbba4976024e57947ed7eb8d271 Jan 29 16:48:03 crc kubenswrapper[4746]: I0129 16:48:03.488076 4746 generic.go:334] "Generic (PLEG): container finished" podID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerID="3c46247804a4d6718e66b026d40caa69660b4a8ebcbd2372961d70cb96c4e59d" exitCode=0 Jan 29 16:48:03 crc kubenswrapper[4746]: I0129 16:48:03.488283 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" event={"ID":"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb","Type":"ContainerDied","Data":"3c46247804a4d6718e66b026d40caa69660b4a8ebcbd2372961d70cb96c4e59d"} Jan 29 16:48:03 crc kubenswrapper[4746]: I0129 16:48:03.492225 4746 generic.go:334] "Generic (PLEG): container finished" podID="38fbb600-5c6f-46d3-bc27-bee771873244" containerID="4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a" exitCode=0 Jan 29 16:48:03 crc kubenswrapper[4746]: I0129 16:48:03.492285 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmqz9" event={"ID":"38fbb600-5c6f-46d3-bc27-bee771873244","Type":"ContainerDied","Data":"4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a"} Jan 29 16:48:03 crc kubenswrapper[4746]: I0129 16:48:03.492327 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmqz9" event={"ID":"38fbb600-5c6f-46d3-bc27-bee771873244","Type":"ContainerStarted","Data":"d20ada1a11d068e610b4a2e2f420de3dbc3e9dbba4976024e57947ed7eb8d271"} Jan 29 16:48:04 crc kubenswrapper[4746]: I0129 16:48:04.502488 4746 generic.go:334] "Generic (PLEG): container finished" podID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerID="7879f7c53198ca37850911ebab2e08e79b27c4f48500981aa5fc20d9c5d6252d" exitCode=0 Jan 29 16:48:04 crc kubenswrapper[4746]: I0129 16:48:04.502548 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" event={"ID":"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb","Type":"ContainerDied","Data":"7879f7c53198ca37850911ebab2e08e79b27c4f48500981aa5fc20d9c5d6252d"} Jan 29 16:48:04 crc kubenswrapper[4746]: I0129 16:48:04.504784 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmqz9" 
event={"ID":"38fbb600-5c6f-46d3-bc27-bee771873244","Type":"ContainerStarted","Data":"a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b"} Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.513648 4746 generic.go:334] "Generic (PLEG): container finished" podID="38fbb600-5c6f-46d3-bc27-bee771873244" containerID="a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b" exitCode=0 Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.513749 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmqz9" event={"ID":"38fbb600-5c6f-46d3-bc27-bee771873244","Type":"ContainerDied","Data":"a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b"} Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.731921 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.907925 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8splt\" (UniqueName: \"kubernetes.io/projected/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-kube-api-access-8splt\") pod \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.908156 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-bundle\") pod \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.908249 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-util\") pod \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\" (UID: \"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb\") " Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.909151 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-bundle" (OuterVolumeSpecName: "bundle") pod "1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" (UID: "1bc178bb-5ffb-4d68-b022-6b2025b2bfcb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.909575 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:05 crc kubenswrapper[4746]: I0129 16:48:05.918111 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-kube-api-access-8splt" (OuterVolumeSpecName: "kube-api-access-8splt") pod "1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" (UID: "1bc178bb-5ffb-4d68-b022-6b2025b2bfcb"). InnerVolumeSpecName "kube-api-access-8splt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:48:06 crc kubenswrapper[4746]: I0129 16:48:06.011520 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8splt\" (UniqueName: \"kubernetes.io/projected/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-kube-api-access-8splt\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:06 crc kubenswrapper[4746]: I0129 16:48:06.444427 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-util" (OuterVolumeSpecName: "util") pod "1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" (UID: "1bc178bb-5ffb-4d68-b022-6b2025b2bfcb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:06 crc kubenswrapper[4746]: I0129 16:48:06.517444 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bc178bb-5ffb-4d68-b022-6b2025b2bfcb-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:06 crc kubenswrapper[4746]: I0129 16:48:06.527276 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" event={"ID":"1bc178bb-5ffb-4d68-b022-6b2025b2bfcb","Type":"ContainerDied","Data":"6446c6def8f80863d11bf6f23724323c8df6c308836ba45cf437e66095e1e805"} Jan 29 16:48:06 crc kubenswrapper[4746]: I0129 16:48:06.527606 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6446c6def8f80863d11bf6f23724323c8df6c308836ba45cf437e66095e1e805" Jan 29 16:48:06 crc kubenswrapper[4746]: I0129 16:48:06.527695 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw" Jan 29 16:48:07 crc kubenswrapper[4746]: I0129 16:48:07.536563 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmqz9" event={"ID":"38fbb600-5c6f-46d3-bc27-bee771873244","Type":"ContainerStarted","Data":"3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f"} Jan 29 16:48:07 crc kubenswrapper[4746]: I0129 16:48:07.570836 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmqz9" podStartSLOduration=3.603586719 podStartE2EDuration="6.570818409s" podCreationTimestamp="2026-01-29 16:48:01 +0000 UTC" firstStartedPulling="2026-01-29 16:48:03.49590627 +0000 UTC m=+805.896490944" lastFinishedPulling="2026-01-29 16:48:06.46313795 +0000 UTC m=+808.863722634" observedRunningTime="2026-01-29 16:48:07.558557992 +0000 UTC m=+809.959142656" watchObservedRunningTime="2026-01-29 16:48:07.570818409 +0000 UTC m=+809.971403063" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.286382 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5zhtq"] Jan 29 16:48:08 crc kubenswrapper[4746]: E0129 16:48:08.286580 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerName="util" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.286592 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerName="util" Jan 29 16:48:08 crc kubenswrapper[4746]: E0129 16:48:08.286607 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerName="extract" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 
16:48:08.286615 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerName="extract" Jan 29 16:48:08 crc kubenswrapper[4746]: E0129 16:48:08.286627 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerName="pull" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.286635 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerName="pull" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.286737 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc178bb-5ffb-4d68-b022-6b2025b2bfcb" containerName="extract" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.287077 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.288691 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lrxqw" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.288763 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.289057 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.298632 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5zhtq"] Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.338585 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bh9\" (UniqueName: \"kubernetes.io/projected/52ea40af-55b3-41e8-9afd-314054287d7d-kube-api-access-92bh9\") pod \"nmstate-operator-646758c888-5zhtq\" (UID: \"52ea40af-55b3-41e8-9afd-314054287d7d\") " pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.439739 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bh9\" (UniqueName: \"kubernetes.io/projected/52ea40af-55b3-41e8-9afd-314054287d7d-kube-api-access-92bh9\") pod \"nmstate-operator-646758c888-5zhtq\" (UID: \"52ea40af-55b3-41e8-9afd-314054287d7d\") " pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.467558 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bh9\" (UniqueName: \"kubernetes.io/projected/52ea40af-55b3-41e8-9afd-314054287d7d-kube-api-access-92bh9\") pod \"nmstate-operator-646758c888-5zhtq\" (UID: \"52ea40af-55b3-41e8-9afd-314054287d7d\") " pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.602091 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" Jan 29 16:48:08 crc kubenswrapper[4746]: I0129 16:48:08.803822 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-5zhtq"] Jan 29 16:48:08 crc kubenswrapper[4746]: W0129 16:48:08.809116 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52ea40af_55b3_41e8_9afd_314054287d7d.slice/crio-e14126bb1b086a66713496e75c7d40c5f5b3ddc6c96223748025c9964c97013a WatchSource:0}: Error finding container e14126bb1b086a66713496e75c7d40c5f5b3ddc6c96223748025c9964c97013a: Status 404 returned error can't find the container with id e14126bb1b086a66713496e75c7d40c5f5b3ddc6c96223748025c9964c97013a Jan 29 16:48:09 crc kubenswrapper[4746]: I0129 16:48:09.550063 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" event={"ID":"52ea40af-55b3-41e8-9afd-314054287d7d","Type":"ContainerStarted","Data":"e14126bb1b086a66713496e75c7d40c5f5b3ddc6c96223748025c9964c97013a"} Jan 29 16:48:12 crc kubenswrapper[4746]: I0129 16:48:12.263587 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:12 crc kubenswrapper[4746]: I0129 16:48:12.263668 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:13 crc kubenswrapper[4746]: I0129 16:48:13.324722 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vmqz9" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="registry-server" probeResult="failure" output=< Jan 29 16:48:13 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Jan 29 16:48:13 crc kubenswrapper[4746]: > Jan 29 16:48:14 crc kubenswrapper[4746]: I0129 16:48:14.584085 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" event={"ID":"52ea40af-55b3-41e8-9afd-314054287d7d","Type":"ContainerStarted","Data":"dd657daa352cefaa5a417c45965ba671bbcd02512d31b373fc707d05cf91a3ab"} Jan 29 16:48:14 crc kubenswrapper[4746]: I0129 16:48:14.611015 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-5zhtq" podStartSLOduration=1.15798361 podStartE2EDuration="6.610987516s" podCreationTimestamp="2026-01-29 16:48:08 +0000 UTC" firstStartedPulling="2026-01-29 16:48:08.811494413 +0000 UTC m=+811.212079047" lastFinishedPulling="2026-01-29 16:48:14.264498269 +0000 UTC m=+816.665082953" observedRunningTime="2026-01-29 16:48:14.60778417 +0000 UTC m=+817.008368864" watchObservedRunningTime="2026-01-29 16:48:14.610987516 +0000 UTC m=+817.011572200" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.626973 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-r8rvf"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.628278 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.630266 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ddspj" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.651352 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-r8rvf"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.658973 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.660050 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.673695 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-zvdqn"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.674649 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.676410 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.682604 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.739573 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ndqq\" (UniqueName: \"kubernetes.io/projected/9e668f5d-ff4a-4ca4-801f-e45e15354829-kube-api-access-9ndqq\") pod \"nmstate-metrics-54757c584b-r8rvf\" (UID: \"9e668f5d-ff4a-4ca4-801f-e45e15354829\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.781541 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.782359 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.789462 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.789517 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.789685 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-thjwt" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.798098 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.840621 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6n4n\" (UniqueName: \"kubernetes.io/projected/4c62df53-5227-44f8-b8b4-dd208f98ca28-kube-api-access-h6n4n\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.840673 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-dbus-socket\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.840719 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-ovs-socket\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.840741 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5kt\" (UniqueName: \"kubernetes.io/projected/ea9bc95d-aaa3-4050-8840-868eb964a03a-kube-api-access-bd5kt\") pod \"nmstate-webhook-8474b5b9d8-27h58\" (UID: \"ea9bc95d-aaa3-4050-8840-868eb964a03a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.840782 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ndqq\" (UniqueName: \"kubernetes.io/projected/9e668f5d-ff4a-4ca4-801f-e45e15354829-kube-api-access-9ndqq\") pod \"nmstate-metrics-54757c584b-r8rvf\" (UID: \"9e668f5d-ff4a-4ca4-801f-e45e15354829\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.840802 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea9bc95d-aaa3-4050-8840-868eb964a03a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-27h58\" (UID: \"ea9bc95d-aaa3-4050-8840-868eb964a03a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.840826 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-nmstate-lock\") pod 
\"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.862677 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ndqq\" (UniqueName: \"kubernetes.io/projected/9e668f5d-ff4a-4ca4-801f-e45e15354829-kube-api-access-9ndqq\") pod \"nmstate-metrics-54757c584b-r8rvf\" (UID: \"9e668f5d-ff4a-4ca4-801f-e45e15354829\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.942290 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-ovs-socket\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.942380 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5kt\" (UniqueName: \"kubernetes.io/projected/ea9bc95d-aaa3-4050-8840-868eb964a03a-kube-api-access-bd5kt\") pod \"nmstate-webhook-8474b5b9d8-27h58\" (UID: \"ea9bc95d-aaa3-4050-8840-868eb964a03a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.942417 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfw96\" (UniqueName: \"kubernetes.io/projected/c27ce050-adb3-4698-a773-248eba35e281-kube-api-access-hfw96\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.942452 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-ovs-socket\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.942462 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea9bc95d-aaa3-4050-8840-868eb964a03a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-27h58\" (UID: \"ea9bc95d-aaa3-4050-8840-868eb964a03a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:15 crc kubenswrapper[4746]: E0129 16:48:15.942956 4746 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.942992 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-nmstate-lock\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: E0129 16:48:15.943039 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea9bc95d-aaa3-4050-8840-868eb964a03a-tls-key-pair podName:ea9bc95d-aaa3-4050-8840-868eb964a03a nodeName:}" failed. No retries permitted until 2026-01-29 16:48:16.443014905 +0000 UTC m=+818.843599549 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ea9bc95d-aaa3-4050-8840-868eb964a03a-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-27h58" (UID: "ea9bc95d-aaa3-4050-8840-868eb964a03a") : secret "openshift-nmstate-webhook" not found Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.943043 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-nmstate-lock\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.943070 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c27ce050-adb3-4698-a773-248eba35e281-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.943126 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6n4n\" (UniqueName: \"kubernetes.io/projected/4c62df53-5227-44f8-b8b4-dd208f98ca28-kube-api-access-h6n4n\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.943163 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c27ce050-adb3-4698-a773-248eba35e281-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.943202 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-dbus-socket\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.943546 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4c62df53-5227-44f8-b8b4-dd208f98ca28-dbus-socket\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.944800 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.960382 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5kt\" (UniqueName: \"kubernetes.io/projected/ea9bc95d-aaa3-4050-8840-868eb964a03a-kube-api-access-bd5kt\") pod \"nmstate-webhook-8474b5b9d8-27h58\" (UID: \"ea9bc95d-aaa3-4050-8840-868eb964a03a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.969508 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6n4n\" (UniqueName: \"kubernetes.io/projected/4c62df53-5227-44f8-b8b4-dd208f98ca28-kube-api-access-h6n4n\") pod \"nmstate-handler-zvdqn\" (UID: \"4c62df53-5227-44f8-b8b4-dd208f98ca28\") " pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.984500 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c6b7c599c-m2tw8"] Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.986179 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:15 crc kubenswrapper[4746]: I0129 16:48:15.989935 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.005113 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c6b7c599c-m2tw8"] Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.045788 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdr6r\" (UniqueName: \"kubernetes.io/projected/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-kube-api-access-cdr6r\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.045856 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-trusted-ca-bundle\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.045903 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-service-ca\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.045930 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-oauth-config\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.046001 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c27ce050-adb3-4698-a773-248eba35e281-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" 
(UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.046036 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c27ce050-adb3-4698-a773-248eba35e281-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.046069 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-config\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.046097 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-serving-cert\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: E0129 16:48:16.046101 4746 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.046121 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-oauth-serving-cert\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.046143 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfw96\" (UniqueName: \"kubernetes.io/projected/c27ce050-adb3-4698-a773-248eba35e281-kube-api-access-hfw96\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: E0129 16:48:16.046236 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c27ce050-adb3-4698-a773-248eba35e281-plugin-serving-cert podName:c27ce050-adb3-4698-a773-248eba35e281 nodeName:}" failed. No retries permitted until 2026-01-29 16:48:16.546173565 +0000 UTC m=+818.946758209 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c27ce050-adb3-4698-a773-248eba35e281-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-grjr9" (UID: "c27ce050-adb3-4698-a773-248eba35e281") : secret "plugin-serving-cert" not found Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.049594 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c27ce050-adb3-4698-a773-248eba35e281-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.065502 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfw96\" (UniqueName: \"kubernetes.io/projected/c27ce050-adb3-4698-a773-248eba35e281-kube-api-access-hfw96\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.148833 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-config\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.148908 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-serving-cert\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.148931 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-oauth-serving-cert\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.148976 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdr6r\" (UniqueName: \"kubernetes.io/projected/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-kube-api-access-cdr6r\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.149009 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-trusted-ca-bundle\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.149064 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-service-ca\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.149081 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-oauth-config\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.151594 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-service-ca\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.151600 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-oauth-serving-cert\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.152000 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-trusted-ca-bundle\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.152377 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-config\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.155848 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-oauth-config\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.156590 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-console-serving-cert\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.169640 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdr6r\" (UniqueName: \"kubernetes.io/projected/e42d5e75-7017-4fd5-97c7-01f16f64d9f2-kube-api-access-cdr6r\") pod \"console-5c6b7c599c-m2tw8\" (UID: \"e42d5e75-7017-4fd5-97c7-01f16f64d9f2\") " pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.175822 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-r8rvf"] Jan 29 16:48:16 crc kubenswrapper[4746]: W0129 16:48:16.186881 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e668f5d_ff4a_4ca4_801f_e45e15354829.slice/crio-4a95ab18e0b0050d530e7806da7be5376b69d5b0aab9df56f03316c01aa2d4be WatchSource:0}: Error finding 
container 4a95ab18e0b0050d530e7806da7be5376b69d5b0aab9df56f03316c01aa2d4be: Status 404 returned error can't find the container with id 4a95ab18e0b0050d530e7806da7be5376b69d5b0aab9df56f03316c01aa2d4be Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.350893 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.452283 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea9bc95d-aaa3-4050-8840-868eb964a03a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-27h58\" (UID: \"ea9bc95d-aaa3-4050-8840-868eb964a03a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.467704 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ea9bc95d-aaa3-4050-8840-868eb964a03a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-27h58\" (UID: \"ea9bc95d-aaa3-4050-8840-868eb964a03a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.554788 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c27ce050-adb3-4698-a773-248eba35e281-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.565398 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c6b7c599c-m2tw8"] Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.567605 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c27ce050-adb3-4698-a773-248eba35e281-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-grjr9\" (UID: \"c27ce050-adb3-4698-a773-248eba35e281\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: W0129 16:48:16.570482 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode42d5e75_7017_4fd5_97c7_01f16f64d9f2.slice/crio-416897e3e7e87fe0d710a294d0f9f1108ea6d1ba11e6f6f89c690f9479c3918d WatchSource:0}: Error finding container 416897e3e7e87fe0d710a294d0f9f1108ea6d1ba11e6f6f89c690f9479c3918d: Status 404 returned error can't find the container with id 416897e3e7e87fe0d710a294d0f9f1108ea6d1ba11e6f6f89c690f9479c3918d Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.576629 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.595069 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zvdqn" event={"ID":"4c62df53-5227-44f8-b8b4-dd208f98ca28","Type":"ContainerStarted","Data":"356d0b099590f91e08ae99920aec3c01f0480d91bc5c5c1b6ed70fd57c3c291f"} Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.596383 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" event={"ID":"9e668f5d-ff4a-4ca4-801f-e45e15354829","Type":"ContainerStarted","Data":"4a95ab18e0b0050d530e7806da7be5376b69d5b0aab9df56f03316c01aa2d4be"} Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.597878 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c6b7c599c-m2tw8" event={"ID":"e42d5e75-7017-4fd5-97c7-01f16f64d9f2","Type":"ContainerStarted","Data":"416897e3e7e87fe0d710a294d0f9f1108ea6d1ba11e6f6f89c690f9479c3918d"} Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.703091 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.777869 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58"] Jan 29 16:48:16 crc kubenswrapper[4746]: W0129 16:48:16.787753 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9bc95d_aaa3_4050_8840_868eb964a03a.slice/crio-898dd9efca8be31ef00d4fe4380ecd23c8b20175df95697039ed0d6b62679fdc WatchSource:0}: Error finding container 898dd9efca8be31ef00d4fe4380ecd23c8b20175df95697039ed0d6b62679fdc: Status 404 returned error can't find the container with id 898dd9efca8be31ef00d4fe4380ecd23c8b20175df95697039ed0d6b62679fdc Jan 29 16:48:16 crc kubenswrapper[4746]: I0129 16:48:16.910280 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9"] Jan 29 16:48:16 crc kubenswrapper[4746]: W0129 16:48:16.917767 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc27ce050_adb3_4698_a773_248eba35e281.slice/crio-76003083d69eaee5e98f7ecff9166c96590d3d2f2609563607930fb776e8a617 WatchSource:0}: Error finding container 76003083d69eaee5e98f7ecff9166c96590d3d2f2609563607930fb776e8a617: Status 404 returned error can't find the container with id 76003083d69eaee5e98f7ecff9166c96590d3d2f2609563607930fb776e8a617 Jan 29 16:48:17 crc kubenswrapper[4746]: I0129 16:48:17.602654 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" event={"ID":"ea9bc95d-aaa3-4050-8840-868eb964a03a","Type":"ContainerStarted","Data":"898dd9efca8be31ef00d4fe4380ecd23c8b20175df95697039ed0d6b62679fdc"} Jan 29 16:48:17 crc kubenswrapper[4746]: I0129 16:48:17.603815 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c6b7c599c-m2tw8" event={"ID":"e42d5e75-7017-4fd5-97c7-01f16f64d9f2","Type":"ContainerStarted","Data":"9f44e500ac4e511454666b1bc44507af40b454cf7720bcb1dbe965a3164abc26"} Jan 29 16:48:17 crc kubenswrapper[4746]: I0129 16:48:17.605802 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" 
event={"ID":"c27ce050-adb3-4698-a773-248eba35e281","Type":"ContainerStarted","Data":"76003083d69eaee5e98f7ecff9166c96590d3d2f2609563607930fb776e8a617"}
Jan 29 16:48:17 crc kubenswrapper[4746]: I0129 16:48:17.624003 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c6b7c599c-m2tw8" podStartSLOduration=2.623983126 podStartE2EDuration="2.623983126s" podCreationTimestamp="2026-01-29 16:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:48:17.623838253 +0000 UTC m=+820.024422937" watchObservedRunningTime="2026-01-29 16:48:17.623983126 +0000 UTC m=+820.024567770"
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.617915 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" event={"ID":"c27ce050-adb3-4698-a773-248eba35e281","Type":"ContainerStarted","Data":"aee27ab460d1a65a165d76de2fa6686453fc5fd3f9b432225a76ece01190f903"}
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.620117 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" event={"ID":"ea9bc95d-aaa3-4050-8840-868eb964a03a","Type":"ContainerStarted","Data":"b5b18bceb875e147f5cd5ad59d50993b721830f1516723f543fa2e32ed9d9135"}
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.620366 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58"
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.622357 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zvdqn" event={"ID":"4c62df53-5227-44f8-b8b4-dd208f98ca28","Type":"ContainerStarted","Data":"b32ad8d0fd4ab4cccece183a2a1afd14384abc5629b732c4cf90b54982180bad"}
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.622443 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-zvdqn"
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.654052 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-zvdqn" podStartSLOduration=2.20099401 podStartE2EDuration="4.654035584s" podCreationTimestamp="2026-01-29 16:48:15 +0000 UTC" firstStartedPulling="2026-01-29 16:48:16.036055045 +0000 UTC m=+818.436639689" lastFinishedPulling="2026-01-29 16:48:18.489096619 +0000 UTC m=+820.889681263" observedRunningTime="2026-01-29 16:48:19.652807251 +0000 UTC m=+822.053391955" watchObservedRunningTime="2026-01-29 16:48:19.654035584 +0000 UTC m=+822.054620228"
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.656383 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-grjr9" podStartSLOduration=2.265813199 podStartE2EDuration="4.656369166s" podCreationTimestamp="2026-01-29 16:48:15 +0000 UTC" firstStartedPulling="2026-01-29 16:48:16.92077345 +0000 UTC m=+819.321358094" lastFinishedPulling="2026-01-29 16:48:19.311329407 +0000 UTC m=+821.711914061" observedRunningTime="2026-01-29 16:48:19.633434044 +0000 UTC m=+822.034018698" watchObservedRunningTime="2026-01-29 16:48:19.656369166 +0000 UTC m=+822.056953820"
Jan 29 16:48:19 crc kubenswrapper[4746]: I0129 16:48:19.685157 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58" podStartSLOduration=2.99738525 podStartE2EDuration="4.685135253s" podCreationTimestamp="2026-01-29 16:48:15 +0000 UTC" firstStartedPulling="2026-01-29 16:48:16.790129207 +0000 UTC m=+819.190713851" lastFinishedPulling="2026-01-29 16:48:18.47787921 +0000 UTC m=+820.878463854" observedRunningTime="2026-01-29 16:48:19.683845658 +0000 UTC m=+822.084430312" watchObservedRunningTime="2026-01-29 16:48:19.685135253 +0000 UTC m=+822.085719907"
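
The "Observed pod startup duration" entries above encode a consistent relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration comes out as that end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling); for console-5c6b7c599c-m2tw8, whose pull timestamps are zero-valued, the two durations therefore coincide. Below is a minimal Go sketch, not kubelet code, that re-derives the nmstate-webhook numbers; the layout string and the subtraction rule are assumptions read directly off these entries:

    // slo_check.go: cross-checks the startup durations logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        // Field values copied verbatim from the nmstate-webhook-8474b5b9d8-27h58 entry.
        created, _ := time.Parse(layout, "2026-01-29 16:48:15 +0000 UTC")
        firstPull, _ := time.Parse(layout, "2026-01-29 16:48:16.790129207 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2026-01-29 16:48:18.47787921 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-01-29 16:48:19.685135253 +0000 UTC")

        e2e := observed.Sub(created)         // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: pull window excluded
        fmt.Println(e2e, slo)                // prints: 4.685135253s 2.99738525s
    }

The printed values match podStartE2EDuration="4.685135253s" and podStartSLOduration=2.99738525 in the entry above; the other entries agree as well, up to the float rounding of the logged SLO value.
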
Jan 29 16:48:22 crc kubenswrapper[4746]: I0129 16:48:22.341117 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmqz9"
Jan 29 16:48:22 crc kubenswrapper[4746]: I0129 16:48:22.401702 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmqz9"
Jan 29 16:48:22 crc kubenswrapper[4746]: I0129 16:48:22.584074 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmqz9"]
Jan 29 16:48:23 crc kubenswrapper[4746]: I0129 16:48:23.647374 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmqz9" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="registry-server" containerID="cri-o://3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f" gracePeriod=2
Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.072829 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmqz9"
Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.168850 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-utilities\") pod \"38fbb600-5c6f-46d3-bc27-bee771873244\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") "
Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.168926 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-catalog-content\") pod \"38fbb600-5c6f-46d3-bc27-bee771873244\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") "
Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.168983 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6lnn\" (UniqueName: \"kubernetes.io/projected/38fbb600-5c6f-46d3-bc27-bee771873244-kube-api-access-r6lnn\") pod \"38fbb600-5c6f-46d3-bc27-bee771873244\" (UID: \"38fbb600-5c6f-46d3-bc27-bee771873244\") "
Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.170894 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-utilities" (OuterVolumeSpecName: "utilities") pod "38fbb600-5c6f-46d3-bc27-bee771873244" (UID: "38fbb600-5c6f-46d3-bc27-bee771873244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.174936 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fbb600-5c6f-46d3-bc27-bee771873244-kube-api-access-r6lnn" (OuterVolumeSpecName: "kube-api-access-r6lnn") pod "38fbb600-5c6f-46d3-bc27-bee771873244" (UID: "38fbb600-5c6f-46d3-bc27-bee771873244"). InnerVolumeSpecName "kube-api-access-r6lnn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.270638 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.270668 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6lnn\" (UniqueName: \"kubernetes.io/projected/38fbb600-5c6f-46d3-bc27-bee771873244-kube-api-access-r6lnn\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.279355 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38fbb600-5c6f-46d3-bc27-bee771873244" (UID: "38fbb600-5c6f-46d3-bc27-bee771873244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.372292 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38fbb600-5c6f-46d3-bc27-bee771873244-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.655674 4746 generic.go:334] "Generic (PLEG): container finished" podID="38fbb600-5c6f-46d3-bc27-bee771873244" containerID="3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f" exitCode=0 Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.655750 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmqz9" event={"ID":"38fbb600-5c6f-46d3-bc27-bee771873244","Type":"ContainerDied","Data":"3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f"} Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.655765 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmqz9" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.655789 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmqz9" event={"ID":"38fbb600-5c6f-46d3-bc27-bee771873244","Type":"ContainerDied","Data":"d20ada1a11d068e610b4a2e2f420de3dbc3e9dbba4976024e57947ed7eb8d271"} Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.655813 4746 scope.go:117] "RemoveContainer" containerID="3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.675725 4746 scope.go:117] "RemoveContainer" containerID="a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.682823 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmqz9"] Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.688111 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmqz9"] Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.697087 4746 scope.go:117] "RemoveContainer" containerID="4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.714802 4746 scope.go:117] "RemoveContainer" containerID="3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f" Jan 29 16:48:24 crc kubenswrapper[4746]: E0129 16:48:24.715382 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f\": container with ID starting with 3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f not found: ID does not exist" containerID="3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.715418 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f"} err="failed to get container status \"3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f\": rpc error: code = NotFound desc = could not find container \"3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f\": container with ID starting with 3ee85290dbf4b145c99c1e70fcdf7fa840b186199b478c9be6e71bf182f0073f not found: ID does not exist" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.715445 4746 scope.go:117] "RemoveContainer" containerID="a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b" Jan 29 16:48:24 crc kubenswrapper[4746]: E0129 16:48:24.715782 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b\": container with ID starting with a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b not found: ID does not exist" containerID="a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.715805 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b"} err="failed to get container status \"a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b\": rpc error: code = NotFound desc = could not find container 
\"a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b\": container with ID starting with a110eddf3c584f157344a6d1161b68d2fa80c808f945592d1d9ba0eed689400b not found: ID does not exist" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.715818 4746 scope.go:117] "RemoveContainer" containerID="4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a" Jan 29 16:48:24 crc kubenswrapper[4746]: E0129 16:48:24.716149 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a\": container with ID starting with 4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a not found: ID does not exist" containerID="4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a" Jan 29 16:48:24 crc kubenswrapper[4746]: I0129 16:48:24.716169 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a"} err="failed to get container status \"4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a\": rpc error: code = NotFound desc = could not find container \"4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a\": container with ID starting with 4cde8bfd19212014164a42431b37d27b7239eb0523d67b9780b07c1ac2c6b99a not found: ID does not exist" Jan 29 16:48:26 crc kubenswrapper[4746]: I0129 16:48:26.009920 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-zvdqn" Jan 29 16:48:26 crc kubenswrapper[4746]: I0129 16:48:26.351746 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:26 crc kubenswrapper[4746]: I0129 16:48:26.351850 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:26 crc kubenswrapper[4746]: I0129 16:48:26.360814 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:26 crc kubenswrapper[4746]: I0129 16:48:26.452948 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" path="/var/lib/kubelet/pods/38fbb600-5c6f-46d3-bc27-bee771873244/volumes" Jan 29 16:48:26 crc kubenswrapper[4746]: I0129 16:48:26.676214 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c6b7c599c-m2tw8" Jan 29 16:48:26 crc kubenswrapper[4746]: I0129 16:48:26.742811 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-np5s4"] Jan 29 16:48:31 crc kubenswrapper[4746]: I0129 16:48:31.706057 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" event={"ID":"9e668f5d-ff4a-4ca4-801f-e45e15354829","Type":"ContainerStarted","Data":"e665b1a262196ea819f32582f0332793a390995135bd4ada8895d1022563dadb"} Jan 29 16:48:33 crc kubenswrapper[4746]: I0129 16:48:33.720309 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" event={"ID":"9e668f5d-ff4a-4ca4-801f-e45e15354829","Type":"ContainerStarted","Data":"a41e25b75a8d72caf6c88d456ad6e476d71054679e9b2b547aefe5d2b6b1290e"} Jan 29 16:48:36 crc kubenswrapper[4746]: I0129 16:48:36.584984 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-27h58"
Jan 29 16:48:36 crc kubenswrapper[4746]: I0129 16:48:36.604421 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-r8rvf" podStartSLOduration=4.618629758 podStartE2EDuration="21.604396901s" podCreationTimestamp="2026-01-29 16:48:15 +0000 UTC" firstStartedPulling="2026-01-29 16:48:16.188999553 +0000 UTC m=+818.589584197" lastFinishedPulling="2026-01-29 16:48:33.174766696 +0000 UTC m=+835.575351340" observedRunningTime="2026-01-29 16:48:33.749162268 +0000 UTC m=+836.149746972" watchObservedRunningTime="2026-01-29 16:48:36.604396901 +0000 UTC m=+839.004981565"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.162366 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"]
Jan 29 16:48:50 crc kubenswrapper[4746]: E0129 16:48:50.163147 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="registry-server"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.163162 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="registry-server"
Jan 29 16:48:50 crc kubenswrapper[4746]: E0129 16:48:50.163174 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="extract-utilities"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.163201 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="extract-utilities"
Jan 29 16:48:50 crc kubenswrapper[4746]: E0129 16:48:50.163219 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="extract-content"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.163228 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="extract-content"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.163380 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fbb600-5c6f-46d3-bc27-bee771873244" containerName="registry-server"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.164396 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"
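
Once the redhat-operators-vmqz9 pod is gone, the kubelet also scrubs its resource-manager bookkeeping (the "RemoveStaleState: removing container" / "Deleted CPUSet assignment" pairs above), and the earlier "RemoveContainer" / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets at 16:48:24 show it re-deleting containers CRI-O had already removed: the NotFound comes back for an ID the runtime no longer knows, so that sequence is idempotent cleanup noise rather than a failure. A rough Go filter, not kubelet code, that separates such benign NotFound errors from ones with no preceding RemoveContainer; the regexes assume the exact message shapes in this log and one entry per line:

    // notfound_noise.go: classify NotFound container-status errors in a kubelet log.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        removed := map[string]bool{} // container IDs the kubelet asked to remove
        reRemove := regexp.MustCompile(`"RemoveContainer" containerID="([0-9a-f]{64})"`)
        reMissing := regexp.MustCompile(`could not find container \\"([0-9a-f]{64})\\"`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // kubelet log lines can be long
        for sc.Scan() {
            line := sc.Text()
            if m := reRemove.FindStringSubmatch(line); m != nil {
                removed[m[1]] = true
            }
            if m := reMissing.FindStringSubmatch(line); m != nil {
                if removed[m[1]] {
                    fmt.Println("benign NotFound after RemoveContainer:", m[1][:12])
                } else {
                    fmt.Println("unexpected NotFound:", m[1][:12])
                }
            }
        }
    }
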
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.166253 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.180016 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"]
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.270621 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7swt\" (UniqueName: \"kubernetes.io/projected/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-kube-api-access-x7swt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.270888 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.271324 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.372480 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.372582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7swt\" (UniqueName: \"kubernetes.io/projected/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-kube-api-access-x7swt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.372614 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"
Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.373646 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.373736 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.395921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7swt\" (UniqueName: \"kubernetes.io/projected/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-kube-api-access-x7swt\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.535307 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" Jan 29 16:48:50 crc kubenswrapper[4746]: I0129 16:48:50.953173 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc"] Jan 29 16:48:51 crc kubenswrapper[4746]: I0129 16:48:51.786691 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-np5s4" podUID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" containerName="console" containerID="cri-o://2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16" gracePeriod=15 Jan 29 16:48:51 crc kubenswrapper[4746]: I0129 16:48:51.834655 4746 generic.go:334] "Generic (PLEG): container finished" podID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerID="8b34d89f66bee859334cd625ff73eed95c3dc6f1be968dec515cd2e2a031e842" exitCode=0 Jan 29 16:48:51 crc kubenswrapper[4746]: I0129 16:48:51.834772 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" event={"ID":"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46","Type":"ContainerDied","Data":"8b34d89f66bee859334cd625ff73eed95c3dc6f1be968dec515cd2e2a031e842"} Jan 29 16:48:51 crc kubenswrapper[4746]: I0129 16:48:51.834980 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" event={"ID":"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46","Type":"ContainerStarted","Data":"320d0252164f8eb65bd4e2ec0cee90b7bac4257764c10c78e79f8f14af91363a"} Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.618491 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-np5s4_c3e1b3f9-082c-452a-b27c-b2eb6ca2b999/console/0.log" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.618554 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-np5s4" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.709543 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-service-ca\") pod \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.709627 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-config\") pod \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.709648 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-serving-cert\") pod \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.709674 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c86p8\" (UniqueName: \"kubernetes.io/projected/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-kube-api-access-c86p8\") pod \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.709753 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-oauth-serving-cert\") pod \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.709805 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-oauth-config\") pod \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.709830 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-trusted-ca-bundle\") pod \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\" (UID: \"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999\") " Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.711007 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" (UID: "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.711080 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-config" (OuterVolumeSpecName: "console-config") pod "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" (UID: "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.711619 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-service-ca" (OuterVolumeSpecName: "service-ca") pod "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" (UID: "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.711690 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" (UID: "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.718286 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" (UID: "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.718424 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" (UID: "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.719338 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-kube-api-access-c86p8" (OuterVolumeSpecName: "kube-api-access-c86p8") pod "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" (UID: "c3e1b3f9-082c-452a-b27c-b2eb6ca2b999"). InnerVolumeSpecName "kube-api-access-c86p8". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.811635 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c86p8\" (UniqueName: \"kubernetes.io/projected/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-kube-api-access-c86p8\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.811701 4746 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.811712 4746 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.811722 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.811737 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-service-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.811748 4746 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.811757 4746 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.846512 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-np5s4_c3e1b3f9-082c-452a-b27c-b2eb6ca2b999/console/0.log"
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.846593 4746 generic.go:334] "Generic (PLEG): container finished" podID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" containerID="2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16" exitCode=2
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.846646 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-np5s4" event={"ID":"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999","Type":"ContainerDied","Data":"2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16"}
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.846705 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-np5s4" event={"ID":"c3e1b3f9-082c-452a-b27c-b2eb6ca2b999","Type":"ContainerDied","Data":"9221eface88c49f659ac01e19d6a647b255593621c7c4da2258901e5cbc5dcb8"}
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.846733 4746 scope.go:117] "RemoveContainer" containerID="2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16"
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.846731 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-np5s4"
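
The console-f9d7485db-np5s4 teardown above runs in the expected order: "Killing container with a grace period" (gracePeriod=15) at 16:48:51.786, per-volume "UnmountVolume started" then "UnmountVolume.TearDown succeeded" then "Volume detached", the PLEG ContainerDied events at 16:48:52.846 (exitCode=2, about a second after the kill and well inside the grace period), and finally RemoveContainer. A stuck teardown would instead leave an "UnmountVolume started" with no matching "Volume detached"; a small Go sketch that flags that case, again a log filter written against the message shapes seen here (it keys on volume name only, so a real checker would also key on pod UID):

    // stuck_unmounts.go: report volumes whose unmount started but never detached.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        pending := map[string]bool{}
        reStart := regexp.MustCompile(`UnmountVolume started for volume \\"([^\\"]+)\\"`)
        reDone := regexp.MustCompile(`Volume detached for volume \\"([^\\"]+)\\"`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            if m := reStart.FindStringSubmatch(line); m != nil {
                pending[m[1]] = true
            }
            if m := reDone.FindStringSubmatch(line); m != nil {
                delete(pending, m[1])
            }
        }
        for vol := range pending {
            fmt.Println("unmount started but never detached:", vol)
        }
    }
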
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.887235 4746 scope.go:117] "RemoveContainer" containerID="2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16"
Jan 29 16:48:52 crc kubenswrapper[4746]: E0129 16:48:52.889954 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16\": container with ID starting with 2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16 not found: ID does not exist" containerID="2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16"
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.890000 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16"} err="failed to get container status \"2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16\": rpc error: code = NotFound desc = could not find container \"2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16\": container with ID starting with 2c219493d6b4729788ff9c674e7b2336684edfca0b70cf6107e9bf78bd3f0a16 not found: ID does not exist"
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.896033 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-np5s4"]
Jan 29 16:48:52 crc kubenswrapper[4746]: I0129 16:48:52.903567 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-np5s4"]
Jan 29 16:48:54 crc kubenswrapper[4746]: I0129 16:48:54.458306 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" path="/var/lib/kubelet/pods/c3e1b3f9-082c-452a-b27c-b2eb6ca2b999/volumes"
Jan 29 16:48:54 crc kubenswrapper[4746]: I0129 16:48:54.860981 4746 generic.go:334] "Generic (PLEG): container finished" podID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerID="8710da89f26245ba4a00008dc3b209a0905c05de80fc7f85789e3cde3cbb1c88" exitCode=0
Jan 29 16:48:54 crc kubenswrapper[4746]: I0129 16:48:54.861041 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" event={"ID":"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46","Type":"ContainerDied","Data":"8710da89f26245ba4a00008dc3b209a0905c05de80fc7f85789e3cde3cbb1c88"}
Jan 29 16:48:55 crc kubenswrapper[4746]: I0129 16:48:55.871709 4746 generic.go:334] "Generic (PLEG): container finished" podID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerID="5bb09f1e2a4cc045527bc027a1f722053483d8b121471dac0ee339b2341359a6" exitCode=0
Jan 29 16:48:55 crc kubenswrapper[4746]: I0129 16:48:55.871749 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" event={"ID":"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46","Type":"ContainerDied","Data":"5bb09f1e2a4cc045527bc027a1f722053483d8b121471dac0ee339b2341359a6"}
Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.132346 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.277364 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-bundle\") pod \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.277826 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-util\") pod \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.277852 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7swt\" (UniqueName: \"kubernetes.io/projected/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-kube-api-access-x7swt\") pod \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\" (UID: \"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46\") " Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.278836 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-bundle" (OuterVolumeSpecName: "bundle") pod "bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" (UID: "bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.279212 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.284028 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-kube-api-access-x7swt" (OuterVolumeSpecName: "kube-api-access-x7swt") pod "bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" (UID: "bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46"). InnerVolumeSpecName "kube-api-access-x7swt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.295405 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-util" (OuterVolumeSpecName: "util") pod "bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" (UID: "bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.380397 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.380437 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7swt\" (UniqueName: \"kubernetes.io/projected/bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46-kube-api-access-x7swt\") on node \"crc\" DevicePath \"\"" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.885771 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" event={"ID":"bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46","Type":"ContainerDied","Data":"320d0252164f8eb65bd4e2ec0cee90b7bac4257764c10c78e79f8f14af91363a"} Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.885825 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320d0252164f8eb65bd4e2ec0cee90b7bac4257764c10c78e79f8f14af91363a" Jan 29 16:48:57 crc kubenswrapper[4746]: I0129 16:48:57.885876 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.208953 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"] Jan 29 16:49:07 crc kubenswrapper[4746]: E0129 16:49:07.209741 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerName="util" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.209752 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerName="util" Jan 29 16:49:07 crc kubenswrapper[4746]: E0129 16:49:07.209761 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" containerName="console" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.209769 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" containerName="console" Jan 29 16:49:07 crc kubenswrapper[4746]: E0129 16:49:07.209780 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerName="extract" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.209787 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerName="extract" Jan 29 16:49:07 crc kubenswrapper[4746]: E0129 16:49:07.209804 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerName="pull" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.209809 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerName="pull" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.209901 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e1b3f9-082c-452a-b27c-b2eb6ca2b999" containerName="console" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.209914 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46" containerName="extract" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.210298 
4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.212437 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.212510 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.212614 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.212864 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mwndn" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.213102 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.269206 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"] Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.299028 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d178b7f-e0fd-45ce-a609-e65c58e026ee-webhook-cert\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.299077 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d178b7f-e0fd-45ce-a609-e65c58e026ee-apiservice-cert\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.299103 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqlb\" (UniqueName: \"kubernetes.io/projected/8d178b7f-e0fd-45ce-a609-e65c58e026ee-kube-api-access-5dqlb\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.400261 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d178b7f-e0fd-45ce-a609-e65c58e026ee-webhook-cert\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.400315 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d178b7f-e0fd-45ce-a609-e65c58e026ee-apiservice-cert\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:07 crc 
kubenswrapper[4746]: I0129 16:49:07.400340 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqlb\" (UniqueName: \"kubernetes.io/projected/8d178b7f-e0fd-45ce-a609-e65c58e026ee-kube-api-access-5dqlb\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.405696 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8d178b7f-e0fd-45ce-a609-e65c58e026ee-webhook-cert\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.405720 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8d178b7f-e0fd-45ce-a609-e65c58e026ee-apiservice-cert\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.425554 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqlb\" (UniqueName: \"kubernetes.io/projected/8d178b7f-e0fd-45ce-a609-e65c58e026ee-kube-api-access-5dqlb\") pod \"metallb-operator-controller-manager-59c79db488-76xvk\" (UID: \"8d178b7f-e0fd-45ce-a609-e65c58e026ee\") " pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.524291 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.554694 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"]
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.555391 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"
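\
Each volume of the two metallb pods walks the same three-step pipeline visible above: "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded" (kube-api-access-5dqlb, for example, goes from verify at 16:49:07.299103 to SetUp at 16:49:07.425554, roughly 126ms). A Go sketch that measures that per-volume delay from a kubelet log; an illustration, not kubelet code, and it assumes one entry per line, the klog time format shown here, keys on volume name only, and ignores midnight rollover:

    // mount_latency.go: time from VerifyControllerAttachedVolume to SetUp succeeded.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    func main() {
        reTime := regexp.MustCompile(`[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})`)
        reVerify := regexp.MustCompile(`VerifyControllerAttachedVolume started for volume \\"([^\\"]+)\\"`)
        reSetUp := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\"`)
        verified := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            tm := reTime.FindStringSubmatch(line)
            if tm == nil {
                continue
            }
            t, err := time.Parse("15:04:05.000000", tm[1])
            if err != nil {
                continue
            }
            if m := reVerify.FindStringSubmatch(line); m != nil {
                verified[m[1]] = t
            }
            if m := reSetUp.FindStringSubmatch(line); m != nil {
                if start, ok := verified[m[1]]; ok {
                    fmt.Printf("%s mounted in %v\n", m[1], t.Sub(start))
                }
            }
        }
    }
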
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.560485 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.560501 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.560737 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-q45zm"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.589521 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"]
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.708214 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652qx\" (UniqueName: \"kubernetes.io/projected/92209676-8aa8-4779-85ab-ff8f430449b2-kube-api-access-652qx\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.708291 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92209676-8aa8-4779-85ab-ff8f430449b2-apiservice-cert\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.708334 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92209676-8aa8-4779-85ab-ff8f430449b2-webhook-cert\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.773053 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-59c79db488-76xvk"]
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.809449 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92209676-8aa8-4779-85ab-ff8f430449b2-webhook-cert\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.809717 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-652qx\" (UniqueName: \"kubernetes.io/projected/92209676-8aa8-4779-85ab-ff8f430449b2-kube-api-access-652qx\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"
Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.809768 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92209676-8aa8-4779-85ab-ff8f430449b2-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.814816 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92209676-8aa8-4779-85ab-ff8f430449b2-apiservice-cert\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.816892 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92209676-8aa8-4779-85ab-ff8f430449b2-webhook-cert\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.824448 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-652qx\" (UniqueName: \"kubernetes.io/projected/92209676-8aa8-4779-85ab-ff8f430449b2-kube-api-access-652qx\") pod \"metallb-operator-webhook-server-5d57bc96cc-5fl5w\" (UID: \"92209676-8aa8-4779-85ab-ff8f430449b2\") " pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.922987 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" Jan 29 16:49:07 crc kubenswrapper[4746]: I0129 16:49:07.944273 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" event={"ID":"8d178b7f-e0fd-45ce-a609-e65c58e026ee","Type":"ContainerStarted","Data":"829e58b3015e5666e26d6ee51845d26357f9ecacd20411bc6ea6073cbe411e47"} Jan 29 16:49:08 crc kubenswrapper[4746]: I0129 16:49:08.309451 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w"] Jan 29 16:49:08 crc kubenswrapper[4746]: W0129 16:49:08.314005 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92209676_8aa8_4779_85ab_ff8f430449b2.slice/crio-079a47191612c953b2a9f0577d7c280db52b5fac56acc1f9fff20d23176a9035 WatchSource:0}: Error finding container 079a47191612c953b2a9f0577d7c280db52b5fac56acc1f9fff20d23176a9035: Status 404 returned error can't find the container with id 079a47191612c953b2a9f0577d7c280db52b5fac56acc1f9fff20d23176a9035 Jan 29 16:49:08 crc kubenswrapper[4746]: I0129 16:49:08.950153 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" event={"ID":"92209676-8aa8-4779-85ab-ff8f430449b2","Type":"ContainerStarted","Data":"079a47191612c953b2a9f0577d7c280db52b5fac56acc1f9fff20d23176a9035"} Jan 29 16:49:12 crc kubenswrapper[4746]: I0129 16:49:12.969224 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" event={"ID":"92209676-8aa8-4779-85ab-ff8f430449b2","Type":"ContainerStarted","Data":"ee38d1621c1f3fae2cc1d9deb38a90bb4bb078348a4d486d4c11396975aee496"} Jan 29 16:49:12 crc kubenswrapper[4746]: I0129 16:49:12.969837 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" Jan 29 16:49:12 crc kubenswrapper[4746]: I0129 16:49:12.991895 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" podStartSLOduration=1.857563901 podStartE2EDuration="5.991875001s" podCreationTimestamp="2026-01-29 16:49:07 +0000 UTC" firstStartedPulling="2026-01-29 16:49:08.31669102 +0000 UTC m=+870.717275664" lastFinishedPulling="2026-01-29 16:49:12.45100212 +0000 UTC m=+874.851586764" observedRunningTime="2026-01-29 16:49:12.987329735 +0000 UTC m=+875.387914399" watchObservedRunningTime="2026-01-29 16:49:12.991875001 +0000 UTC m=+875.392459655" Jan 29 16:49:16 crc kubenswrapper[4746]: I0129 16:49:16.988371 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" event={"ID":"8d178b7f-e0fd-45ce-a609-e65c58e026ee","Type":"ContainerStarted","Data":"54166eccd00596362284d3becf8e01519b0f5b09f41fd0c5fd5fab9f6b54370f"} Jan 29 16:49:16 crc kubenswrapper[4746]: I0129 16:49:16.988911 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:17 crc kubenswrapper[4746]: I0129 16:49:17.027659 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" podStartSLOduration=1.750293014 podStartE2EDuration="10.027637815s" podCreationTimestamp="2026-01-29 16:49:07 +0000 UTC" firstStartedPulling="2026-01-29 16:49:07.781041913 +0000 UTC m=+870.181626557" lastFinishedPulling="2026-01-29 16:49:16.058386714 +0000 UTC m=+878.458971358" observedRunningTime="2026-01-29 16:49:17.022645767 +0000 UTC m=+879.423230421" watchObservedRunningTime="2026-01-29 16:49:17.027637815 +0000 UTC m=+879.428222459" Jan 29 16:49:27 crc kubenswrapper[4746]: I0129 16:49:27.929486 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5d57bc96cc-5fl5w" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.243052 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9h6"] Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.244973 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.262693 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9h6"] Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.347327 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-catalog-content\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.347621 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrjz\" (UniqueName: \"kubernetes.io/projected/64c2771e-a965-4dd6-aed4-83291bff40dc-kube-api-access-ngrjz\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.347688 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-utilities\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.448907 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngrjz\" (UniqueName: \"kubernetes.io/projected/64c2771e-a965-4dd6-aed4-83291bff40dc-kube-api-access-ngrjz\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.448974 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-utilities\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.449021 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-catalog-content\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.449770 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-utilities\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.451173 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-catalog-content\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.468457 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ngrjz\" (UniqueName: \"kubernetes.io/projected/64c2771e-a965-4dd6-aed4-83291bff40dc-kube-api-access-ngrjz\") pod \"redhat-marketplace-xv9h6\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:41 crc kubenswrapper[4746]: I0129 16:49:41.569384 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:42 crc kubenswrapper[4746]: I0129 16:49:42.003631 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9h6"] Jan 29 16:49:42 crc kubenswrapper[4746]: I0129 16:49:42.153699 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9h6" event={"ID":"64c2771e-a965-4dd6-aed4-83291bff40dc","Type":"ContainerStarted","Data":"cee11089158dbe9353949aa699fd5481f54e4efa3ae848d4544291cdb351b2e6"} Jan 29 16:49:43 crc kubenswrapper[4746]: I0129 16:49:43.163181 4746 generic.go:334] "Generic (PLEG): container finished" podID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerID="55f14534a801c25a34db259c822654e3564bba252f76344a869082a8cd6625e1" exitCode=0 Jan 29 16:49:43 crc kubenswrapper[4746]: I0129 16:49:43.163282 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9h6" event={"ID":"64c2771e-a965-4dd6-aed4-83291bff40dc","Type":"ContainerDied","Data":"55f14534a801c25a34db259c822654e3564bba252f76344a869082a8cd6625e1"} Jan 29 16:49:44 crc kubenswrapper[4746]: I0129 16:49:44.177233 4746 generic.go:334] "Generic (PLEG): container finished" podID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerID="40261d1c96e02a1d8dbc960dc7f8046b45754110b29a69624e1ce1464194b2e7" exitCode=0 Jan 29 16:49:44 crc kubenswrapper[4746]: I0129 16:49:44.177307 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9h6" event={"ID":"64c2771e-a965-4dd6-aed4-83291bff40dc","Type":"ContainerDied","Data":"40261d1c96e02a1d8dbc960dc7f8046b45754110b29a69624e1ce1464194b2e7"} Jan 29 16:49:45 crc kubenswrapper[4746]: I0129 16:49:45.188582 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9h6" event={"ID":"64c2771e-a965-4dd6-aed4-83291bff40dc","Type":"ContainerStarted","Data":"164a3b24fef2d0c183de13a20d8ea9608afdeef2e694db25ae40a35d45718251"} Jan 29 16:49:45 crc kubenswrapper[4746]: I0129 16:49:45.217158 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xv9h6" podStartSLOduration=2.744507859 podStartE2EDuration="4.217135777s" podCreationTimestamp="2026-01-29 16:49:41 +0000 UTC" firstStartedPulling="2026-01-29 16:49:43.167082129 +0000 UTC m=+905.567666773" lastFinishedPulling="2026-01-29 16:49:44.639710037 +0000 UTC m=+907.040294691" observedRunningTime="2026-01-29 16:49:45.216509671 +0000 UTC m=+907.617094315" watchObservedRunningTime="2026-01-29 16:49:45.217135777 +0000 UTC m=+907.617720431" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.528297 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-59c79db488-76xvk" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.632347 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m849x"] Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.634060 4746 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.657782 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m849x"] Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.745559 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtqkp\" (UniqueName: \"kubernetes.io/projected/1c5ef100-0625-4a08-95c9-13b250a32fd9-kube-api-access-mtqkp\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.745628 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-catalog-content\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.745675 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-utilities\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.847025 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-catalog-content\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.847090 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-utilities\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.847143 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtqkp\" (UniqueName: \"kubernetes.io/projected/1c5ef100-0625-4a08-95c9-13b250a32fd9-kube-api-access-mtqkp\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.847518 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-catalog-content\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.847553 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-utilities\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.866508 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtqkp\" (UniqueName: \"kubernetes.io/projected/1c5ef100-0625-4a08-95c9-13b250a32fd9-kube-api-access-mtqkp\") pod \"community-operators-m849x\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:47 crc kubenswrapper[4746]: I0129 16:49:47.954557 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.238444 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m849x"] Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.882010 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv"] Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.882820 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.888246 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.888392 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-828vg" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.889301 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k8qp6"] Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.893165 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.894836 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv"] Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.895355 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.899502 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.964871 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-metrics\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.964919 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f54d72a4-5843-4b08-baf7-86689474f3e2-metrics-certs\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.964945 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7x7\" (UniqueName: \"kubernetes.io/projected/1bad14f2-00a0-4101-880f-fa01992db9d6-kube-api-access-7d7x7\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z9cv\" (UID: \"1bad14f2-00a0-4101-880f-fa01992db9d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.964967 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-conf\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.964984 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-reloader\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.965007 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6q6\" (UniqueName: \"kubernetes.io/projected/f54d72a4-5843-4b08-baf7-86689474f3e2-kube-api-access-4h6q6\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.965219 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bad14f2-00a0-4101-880f-fa01992db9d6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z9cv\" (UID: \"1bad14f2-00a0-4101-880f-fa01992db9d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.965365 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-sockets\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.965406 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-startup\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.978576 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v2k9q"] Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.979436 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v2k9q" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.984602 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q8mbw" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.984602 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.984649 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.984680 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.985182 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-sllnl"] Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.986391 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:48 crc kubenswrapper[4746]: I0129 16:49:48.988336 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.013454 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sllnl"] Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.065435 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.065509 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066516 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066562 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-sockets\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066585 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-startup\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066676 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq47r\" (UniqueName: \"kubernetes.io/projected/b0e51f0a-824b-407e-ad15-09190e437c74-kube-api-access-jq47r\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066735 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjntw\" (UniqueName: \"kubernetes.io/projected/91614fd8-907a-4093-b290-20c533e82be5-kube-api-access-zjntw\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066775 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-metrics\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066795 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-metrics-certs\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.066945 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f54d72a4-5843-4b08-baf7-86689474f3e2-metrics-certs\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067065 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-metrics-certs\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067098 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d7x7\" (UniqueName: \"kubernetes.io/projected/1bad14f2-00a0-4101-880f-fa01992db9d6-kube-api-access-7d7x7\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z9cv\" (UID: \"1bad14f2-00a0-4101-880f-fa01992db9d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067132 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-metrics\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067159 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-conf\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067202 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-reloader\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067254 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h6q6\" (UniqueName: \"kubernetes.io/projected/f54d72a4-5843-4b08-baf7-86689474f3e2-kube-api-access-4h6q6\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067160 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-sockets\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067330 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-cert\") pod \"controller-6968d8fdc4-sllnl\" (UID: 
\"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067430 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-reloader\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067448 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b0e51f0a-824b-407e-ad15-09190e437c74-metallb-excludel2\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067496 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bad14f2-00a0-4101-880f-fa01992db9d6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z9cv\" (UID: \"1bad14f2-00a0-4101-880f-fa01992db9d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.067637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-startup\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.067678 4746 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.067756 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bad14f2-00a0-4101-880f-fa01992db9d6-cert podName:1bad14f2-00a0-4101-880f-fa01992db9d6 nodeName:}" failed. No retries permitted until 2026-01-29 16:49:49.567733061 +0000 UTC m=+911.968317705 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1bad14f2-00a0-4101-880f-fa01992db9d6-cert") pod "frr-k8s-webhook-server-7df86c4f6c-7z9cv" (UID: "1bad14f2-00a0-4101-880f-fa01992db9d6") : secret "frr-k8s-webhook-server-cert" not found Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.068017 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f54d72a4-5843-4b08-baf7-86689474f3e2-frr-conf\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.089140 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h6q6\" (UniqueName: \"kubernetes.io/projected/f54d72a4-5843-4b08-baf7-86689474f3e2-kube-api-access-4h6q6\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.090878 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f54d72a4-5843-4b08-baf7-86689474f3e2-metrics-certs\") pod \"frr-k8s-k8qp6\" (UID: \"f54d72a4-5843-4b08-baf7-86689474f3e2\") " pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.107101 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d7x7\" (UniqueName: \"kubernetes.io/projected/1bad14f2-00a0-4101-880f-fa01992db9d6-kube-api-access-7d7x7\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z9cv\" (UID: \"1bad14f2-00a0-4101-880f-fa01992db9d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.169668 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-metrics-certs\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.169758 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-metrics-certs\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.169830 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-cert\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.169855 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b0e51f0a-824b-407e-ad15-09190e437c74-metallb-excludel2\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.169880 4746 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.169909 4746 secret.go:188] Couldn't get secret 
metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.169938 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.169993 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-metrics-certs podName:91614fd8-907a-4093-b290-20c533e82be5 nodeName:}" failed. No retries permitted until 2026-01-29 16:49:49.669943233 +0000 UTC m=+912.070527877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-metrics-certs") pod "controller-6968d8fdc4-sllnl" (UID: "91614fd8-907a-4093-b290-20c533e82be5") : secret "controller-certs-secret" not found Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.170018 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-metrics-certs podName:b0e51f0a-824b-407e-ad15-09190e437c74 nodeName:}" failed. No retries permitted until 2026-01-29 16:49:49.670005134 +0000 UTC m=+912.070589778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-metrics-certs") pod "speaker-v2k9q" (UID: "b0e51f0a-824b-407e-ad15-09190e437c74") : secret "speaker-certs-secret" not found Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.170152 4746 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.170155 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq47r\" (UniqueName: \"kubernetes.io/projected/b0e51f0a-824b-407e-ad15-09190e437c74-kube-api-access-jq47r\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.170263 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist podName:b0e51f0a-824b-407e-ad15-09190e437c74 nodeName:}" failed. No retries permitted until 2026-01-29 16:49:49.67023646 +0000 UTC m=+912.070821324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist") pod "speaker-v2k9q" (UID: "b0e51f0a-824b-407e-ad15-09190e437c74") : secret "metallb-memberlist" not found Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.170383 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjntw\" (UniqueName: \"kubernetes.io/projected/91614fd8-907a-4093-b290-20c533e82be5-kube-api-access-zjntw\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.171080 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b0e51f0a-824b-407e-ad15-09190e437c74-metallb-excludel2\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.171763 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.184665 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-cert\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.189782 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq47r\" (UniqueName: \"kubernetes.io/projected/b0e51f0a-824b-407e-ad15-09190e437c74-kube-api-access-jq47r\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.194303 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjntw\" (UniqueName: \"kubernetes.io/projected/91614fd8-907a-4093-b290-20c533e82be5-kube-api-access-zjntw\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.214741 4746 generic.go:334] "Generic (PLEG): container finished" podID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerID="f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936" exitCode=0 Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.214814 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m849x" event={"ID":"1c5ef100-0625-4a08-95c9-13b250a32fd9","Type":"ContainerDied","Data":"f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936"} Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.214888 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m849x" event={"ID":"1c5ef100-0625-4a08-95c9-13b250a32fd9","Type":"ContainerStarted","Data":"908d8f76154b72ccfea8e99feeaee97cc5b45e0c493e5c0b335996df5e0f554b"} Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.217805 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.575589 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bad14f2-00a0-4101-880f-fa01992db9d6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z9cv\" (UID: \"1bad14f2-00a0-4101-880f-fa01992db9d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.579856 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bad14f2-00a0-4101-880f-fa01992db9d6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7z9cv\" (UID: \"1bad14f2-00a0-4101-880f-fa01992db9d6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.676492 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-metrics-certs\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.676591 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.676650 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-metrics-certs\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.676956 4746 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:49:49 crc kubenswrapper[4746]: E0129 16:49:49.677056 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist podName:b0e51f0a-824b-407e-ad15-09190e437c74 nodeName:}" failed. No retries permitted until 2026-01-29 16:49:50.67703325 +0000 UTC m=+913.077617904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist") pod "speaker-v2k9q" (UID: "b0e51f0a-824b-407e-ad15-09190e437c74") : secret "metallb-memberlist" not found Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.681005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91614fd8-907a-4093-b290-20c533e82be5-metrics-certs\") pod \"controller-6968d8fdc4-sllnl\" (UID: \"91614fd8-907a-4093-b290-20c533e82be5\") " pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.681476 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-metrics-certs\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.809660 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:49 crc kubenswrapper[4746]: I0129 16:49:49.903143 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:50 crc kubenswrapper[4746]: I0129 16:49:50.175193 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sllnl"] Jan 29 16:49:50 crc kubenswrapper[4746]: I0129 16:49:50.225085 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerStarted","Data":"6246faf41b5ffe44372fd415e2aca2b85b43fc034b2b451063f4a03d9e109933"} Jan 29 16:49:50 crc kubenswrapper[4746]: I0129 16:49:50.228151 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m849x" event={"ID":"1c5ef100-0625-4a08-95c9-13b250a32fd9","Type":"ContainerStarted","Data":"37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d"} Jan 29 16:49:50 crc kubenswrapper[4746]: I0129 16:49:50.229525 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sllnl" event={"ID":"91614fd8-907a-4093-b290-20c533e82be5","Type":"ContainerStarted","Data":"06dbd816638f156510fd6ce330dd5d79a28e8d1654b4605ec416c92cdf244433"} Jan 29 16:49:50 crc kubenswrapper[4746]: I0129 16:49:50.582214 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv"] Jan 29 16:49:50 crc kubenswrapper[4746]: W0129 16:49:50.611543 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bad14f2_00a0_4101_880f_fa01992db9d6.slice/crio-2cc8536c0002f44b5af5144f259e483268a27b4139524e96130a51ff7fc5c2f6 WatchSource:0}: Error finding container 2cc8536c0002f44b5af5144f259e483268a27b4139524e96130a51ff7fc5c2f6: Status 404 returned error can't find the container with id 2cc8536c0002f44b5af5144f259e483268a27b4139524e96130a51ff7fc5c2f6 Jan 29 16:49:50 crc kubenswrapper[4746]: I0129 16:49:50.695869 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" 
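The speaker-v2k9q memberlist mount above is retried with a doubling backoff while the metallb-memberlist secret does not yet exist: the kubelet schedules each next attempt with durationBeforeRetry=500ms, then 1s, then (in the entries that follow) 2s, and the mount finally succeeds at 16:49:52 once the secret has been created. Below is a minimal sketch of that doubling pattern for illustration only — the initial delay and the cap are assumptions, and mountSecret is a hypothetical stand-in for the kubelet's nestedpendingoperations machinery, not its real code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountSecret stands in for the volume SetUp call; it fails until the
// secret exists, mimicking `secret "metallb-memberlist" not found`.
func mountSecret(attempt int) error {
	if attempt < 3 {
		return errors.New(`secret "metallb-memberlist" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // first durationBeforeRetry seen in the log
	const maxDelay = 2 * time.Minute // cap is an assumption; the log only shows 500ms, 1s, 2s

	for attempt := 0; ; attempt++ {
		err := mountSecret(attempt)
		if err == nil {
			fmt.Println("MountVolume.SetUp succeeded")
			return
		}
		fmt.Printf("mount failed: %v; no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Run as-is, this prints the same 500ms/1s/2s progression before succeeding on the fourth attempt, matching the three "secret \"metallb-memberlist\" not found" failures recorded between 16:49:49 and 16:49:50.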
Jan 29 16:49:50 crc kubenswrapper[4746]: E0129 16:49:50.696133 4746 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 16:49:50 crc kubenswrapper[4746]: E0129 16:49:50.696539 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist podName:b0e51f0a-824b-407e-ad15-09190e437c74 nodeName:}" failed. No retries permitted until 2026-01-29 16:49:52.696513797 +0000 UTC m=+915.097098441 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist") pod "speaker-v2k9q" (UID: "b0e51f0a-824b-407e-ad15-09190e437c74") : secret "metallb-memberlist" not found Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.241046 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" event={"ID":"1bad14f2-00a0-4101-880f-fa01992db9d6","Type":"ContainerStarted","Data":"2cc8536c0002f44b5af5144f259e483268a27b4139524e96130a51ff7fc5c2f6"} Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.246537 4746 generic.go:334] "Generic (PLEG): container finished" podID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerID="37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d" exitCode=0 Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.246659 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m849x" event={"ID":"1c5ef100-0625-4a08-95c9-13b250a32fd9","Type":"ContainerDied","Data":"37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d"} Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.251054 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sllnl" event={"ID":"91614fd8-907a-4093-b290-20c533e82be5","Type":"ContainerStarted","Data":"49f96fc41ed5a8db79dda2898b24be107947e47b42a41004ca6a443ffd2533b9"} Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.251127 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sllnl" event={"ID":"91614fd8-907a-4093-b290-20c533e82be5","Type":"ContainerStarted","Data":"0095991d5e70903a19da7f1747bd22714fec411840604686bd75afb0411a3882"} Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.251283 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.300037 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-sllnl" podStartSLOduration=3.3000094300000002 podStartE2EDuration="3.30000943s" podCreationTimestamp="2026-01-29 16:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:49:51.29399588 +0000 UTC m=+913.694580514" watchObservedRunningTime="2026-01-29 16:49:51.30000943 +0000 UTC m=+913.700594074" Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.569613 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.569710 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:51 crc kubenswrapper[4746]: I0129 16:49:51.625271 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:52 crc kubenswrapper[4746]: I0129 16:49:52.276051 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m849x" event={"ID":"1c5ef100-0625-4a08-95c9-13b250a32fd9","Type":"ContainerStarted","Data":"2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e"} Jan 29 16:49:52 crc kubenswrapper[4746]: I0129 16:49:52.314135 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m849x" podStartSLOduration=2.843548552 podStartE2EDuration="5.314116233s" podCreationTimestamp="2026-01-29 16:49:47 +0000 UTC" firstStartedPulling="2026-01-29 16:49:49.217401946 +0000 UTC m=+911.617986590" lastFinishedPulling="2026-01-29 16:49:51.687969627 +0000 UTC m=+914.088554271" observedRunningTime="2026-01-29 16:49:52.312612474 +0000 UTC m=+914.713197118" watchObservedRunningTime="2026-01-29 16:49:52.314116233 +0000 UTC m=+914.714700877" Jan 29 16:49:52 crc kubenswrapper[4746]: I0129 16:49:52.442039 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:52 crc kubenswrapper[4746]: I0129 16:49:52.730645 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:52 crc kubenswrapper[4746]: I0129 16:49:52.740917 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b0e51f0a-824b-407e-ad15-09190e437c74-memberlist\") pod \"speaker-v2k9q\" (UID: \"b0e51f0a-824b-407e-ad15-09190e437c74\") " pod="metallb-system/speaker-v2k9q" Jan 29 16:49:52 crc kubenswrapper[4746]: I0129 16:49:52.895284 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-v2k9q" Jan 29 16:49:52 crc kubenswrapper[4746]: W0129 16:49:52.930281 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0e51f0a_824b_407e_ad15_09190e437c74.slice/crio-3d6d0d930a0338b0204bb448856bf7ce9138baab23b018cc289e95a91c6b76af WatchSource:0}: Error finding container 3d6d0d930a0338b0204bb448856bf7ce9138baab23b018cc289e95a91c6b76af: Status 404 returned error can't find the container with id 3d6d0d930a0338b0204bb448856bf7ce9138baab23b018cc289e95a91c6b76af Jan 29 16:49:53 crc kubenswrapper[4746]: I0129 16:49:53.287489 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2k9q" event={"ID":"b0e51f0a-824b-407e-ad15-09190e437c74","Type":"ContainerStarted","Data":"bd82bfdc407c4c7f44d87801f7f99e89bf303c91a9177d330797abca94affe4a"} Jan 29 16:49:53 crc kubenswrapper[4746]: I0129 16:49:53.287821 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2k9q" event={"ID":"b0e51f0a-824b-407e-ad15-09190e437c74","Type":"ContainerStarted","Data":"3d6d0d930a0338b0204bb448856bf7ce9138baab23b018cc289e95a91c6b76af"} Jan 29 16:49:54 crc kubenswrapper[4746]: I0129 16:49:54.355046 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v2k9q" event={"ID":"b0e51f0a-824b-407e-ad15-09190e437c74","Type":"ContainerStarted","Data":"664f8ae5b43daaefd712a566d8232b4d77776e042f6024360804396be7ff3b87"} Jan 29 16:49:54 crc kubenswrapper[4746]: I0129 16:49:54.355315 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v2k9q" Jan 29 16:49:54 crc kubenswrapper[4746]: I0129 16:49:54.384885 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v2k9q" podStartSLOduration=6.384868632 podStartE2EDuration="6.384868632s" podCreationTimestamp="2026-01-29 16:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:49:54.380793814 +0000 UTC m=+916.781378458" watchObservedRunningTime="2026-01-29 16:49:54.384868632 +0000 UTC m=+916.785453266" Jan 29 16:49:55 crc kubenswrapper[4746]: I0129 16:49:55.230074 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9h6"] Jan 29 16:49:55 crc kubenswrapper[4746]: I0129 16:49:55.230645 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xv9h6" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerName="registry-server" containerID="cri-o://164a3b24fef2d0c183de13a20d8ea9608afdeef2e694db25ae40a35d45718251" gracePeriod=2 Jan 29 16:49:55 crc kubenswrapper[4746]: I0129 16:49:55.364458 4746 generic.go:334] "Generic (PLEG): container finished" podID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerID="164a3b24fef2d0c183de13a20d8ea9608afdeef2e694db25ae40a35d45718251" exitCode=0 Jan 29 16:49:55 crc kubenswrapper[4746]: I0129 16:49:55.364519 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9h6" event={"ID":"64c2771e-a965-4dd6-aed4-83291bff40dc","Type":"ContainerDied","Data":"164a3b24fef2d0c183de13a20d8ea9608afdeef2e694db25ae40a35d45718251"} Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.289877 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.381910 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9h6" event={"ID":"64c2771e-a965-4dd6-aed4-83291bff40dc","Type":"ContainerDied","Data":"cee11089158dbe9353949aa699fd5481f54e4efa3ae848d4544291cdb351b2e6"} Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.382600 4746 scope.go:117] "RemoveContainer" containerID="164a3b24fef2d0c183de13a20d8ea9608afdeef2e694db25ae40a35d45718251" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.381950 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9h6" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.384565 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" event={"ID":"1bad14f2-00a0-4101-880f-fa01992db9d6","Type":"ContainerStarted","Data":"753943147ad9de4b080c476ac10ed2744f1bdd6dd60e481edf36f800294efb9b"} Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.384664 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.386709 4746 generic.go:334] "Generic (PLEG): container finished" podID="f54d72a4-5843-4b08-baf7-86689474f3e2" containerID="89f50a4aeb66243e0d51dc3ee95d38139de6e64e2dbf2ee84ea60072e4a9c59a" exitCode=0 Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.386772 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerDied","Data":"89f50a4aeb66243e0d51dc3ee95d38139de6e64e2dbf2ee84ea60072e4a9c59a"} Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.405463 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" podStartSLOduration=2.928317587 podStartE2EDuration="9.405435904s" podCreationTimestamp="2026-01-29 16:49:48 +0000 UTC" firstStartedPulling="2026-01-29 16:49:50.614641278 +0000 UTC m=+913.015225922" lastFinishedPulling="2026-01-29 16:49:57.091759595 +0000 UTC m=+919.492344239" observedRunningTime="2026-01-29 16:49:57.398688184 +0000 UTC m=+919.799272838" watchObservedRunningTime="2026-01-29 16:49:57.405435904 +0000 UTC m=+919.806020548" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.406401 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-utilities\") pod \"64c2771e-a965-4dd6-aed4-83291bff40dc\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.406536 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngrjz\" (UniqueName: \"kubernetes.io/projected/64c2771e-a965-4dd6-aed4-83291bff40dc-kube-api-access-ngrjz\") pod \"64c2771e-a965-4dd6-aed4-83291bff40dc\" (UID: \"64c2771e-a965-4dd6-aed4-83291bff40dc\") " Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.406806 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-catalog-content\") pod \"64c2771e-a965-4dd6-aed4-83291bff40dc\" (UID: 
\"64c2771e-a965-4dd6-aed4-83291bff40dc\") " Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.407563 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-utilities" (OuterVolumeSpecName: "utilities") pod "64c2771e-a965-4dd6-aed4-83291bff40dc" (UID: "64c2771e-a965-4dd6-aed4-83291bff40dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.411602 4746 scope.go:117] "RemoveContainer" containerID="40261d1c96e02a1d8dbc960dc7f8046b45754110b29a69624e1ce1464194b2e7" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.413091 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c2771e-a965-4dd6-aed4-83291bff40dc-kube-api-access-ngrjz" (OuterVolumeSpecName: "kube-api-access-ngrjz") pod "64c2771e-a965-4dd6-aed4-83291bff40dc" (UID: "64c2771e-a965-4dd6-aed4-83291bff40dc"). InnerVolumeSpecName "kube-api-access-ngrjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.432037 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64c2771e-a965-4dd6-aed4-83291bff40dc" (UID: "64c2771e-a965-4dd6-aed4-83291bff40dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.443325 4746 scope.go:117] "RemoveContainer" containerID="55f14534a801c25a34db259c822654e3564bba252f76344a869082a8cd6625e1" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.509287 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.509317 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c2771e-a965-4dd6-aed4-83291bff40dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.509329 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngrjz\" (UniqueName: \"kubernetes.io/projected/64c2771e-a965-4dd6-aed4-83291bff40dc-kube-api-access-ngrjz\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.724447 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9h6"] Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.728537 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9h6"] Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.955898 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:57 crc kubenswrapper[4746]: I0129 16:49:57.955944 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:58 crc kubenswrapper[4746]: I0129 16:49:58.014747 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:58 crc kubenswrapper[4746]: I0129 16:49:58.395046 4746 generic.go:334] "Generic 
(PLEG): container finished" podID="f54d72a4-5843-4b08-baf7-86689474f3e2" containerID="12a2af477efa65fb7dda9fbe93ed88357b1e40de8501f6156eb7ef29c3e1a6d5" exitCode=0 Jan 29 16:49:58 crc kubenswrapper[4746]: I0129 16:49:58.395340 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerDied","Data":"12a2af477efa65fb7dda9fbe93ed88357b1e40de8501f6156eb7ef29c3e1a6d5"} Jan 29 16:49:58 crc kubenswrapper[4746]: I0129 16:49:58.461620 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" path="/var/lib/kubelet/pods/64c2771e-a965-4dd6-aed4-83291bff40dc/volumes" Jan 29 16:49:58 crc kubenswrapper[4746]: I0129 16:49:58.465210 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:49:59 crc kubenswrapper[4746]: I0129 16:49:59.410551 4746 generic.go:334] "Generic (PLEG): container finished" podID="f54d72a4-5843-4b08-baf7-86689474f3e2" containerID="b11ab93e859b4b334be3e931742e6387cdb7c328a2f0a726c6523116938f57e7" exitCode=0 Jan 29 16:49:59 crc kubenswrapper[4746]: I0129 16:49:59.411778 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerDied","Data":"b11ab93e859b4b334be3e931742e6387cdb7c328a2f0a726c6523116938f57e7"} Jan 29 16:50:00 crc kubenswrapper[4746]: I0129 16:50:00.436484 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerStarted","Data":"aeeb301ae7bfb383bfea8c45fcfcb28993ed998fe1e2aaa63447905acbdcb811"} Jan 29 16:50:00 crc kubenswrapper[4746]: I0129 16:50:00.436846 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerStarted","Data":"ab177f8dfc46b266c4258987eb273168a6eaa59cfea7039b29d66b2761c6f793"} Jan 29 16:50:00 crc kubenswrapper[4746]: I0129 16:50:00.436859 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerStarted","Data":"6ef3c498147c462a3739a3d8debba977d824cd61bd1a286ac0ccad06011a9a23"} Jan 29 16:50:00 crc kubenswrapper[4746]: I0129 16:50:00.436870 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerStarted","Data":"62c66f4d31181aacd1cdeb73e98c8a56c6411739045599a25732e550972999b3"} Jan 29 16:50:00 crc kubenswrapper[4746]: I0129 16:50:00.436881 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerStarted","Data":"30994dad7bec040eb18b33c7e1a74f366132181a9965a6588dbaed9fba5f4989"} Jan 29 16:50:01 crc kubenswrapper[4746]: I0129 16:50:01.449001 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8qp6" event={"ID":"f54d72a4-5843-4b08-baf7-86689474f3e2","Type":"ContainerStarted","Data":"e05de6b6fed3f7910612ab13610114f0592df9199edb5546baf170f8187b2f43"} Jan 29 16:50:01 crc kubenswrapper[4746]: I0129 16:50:01.449371 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:50:01 crc kubenswrapper[4746]: I0129 16:50:01.478025 4746 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k8qp6" podStartSLOduration=5.843878183 podStartE2EDuration="13.478005727s" podCreationTimestamp="2026-01-29 16:49:48 +0000 UTC" firstStartedPulling="2026-01-29 16:49:49.472536217 +0000 UTC m=+911.873120861" lastFinishedPulling="2026-01-29 16:49:57.106663761 +0000 UTC m=+919.507248405" observedRunningTime="2026-01-29 16:50:01.474854383 +0000 UTC m=+923.875439067" watchObservedRunningTime="2026-01-29 16:50:01.478005727 +0000 UTC m=+923.878590391" Jan 29 16:50:01 crc kubenswrapper[4746]: I0129 16:50:01.631285 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m849x"] Jan 29 16:50:01 crc kubenswrapper[4746]: I0129 16:50:01.631840 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m849x" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="registry-server" containerID="cri-o://2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e" gracePeriod=2 Jan 29 16:50:01 crc kubenswrapper[4746]: I0129 16:50:01.996803 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.073658 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-catalog-content\") pod \"1c5ef100-0625-4a08-95c9-13b250a32fd9\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.074050 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-utilities\") pod \"1c5ef100-0625-4a08-95c9-13b250a32fd9\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.074092 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtqkp\" (UniqueName: \"kubernetes.io/projected/1c5ef100-0625-4a08-95c9-13b250a32fd9-kube-api-access-mtqkp\") pod \"1c5ef100-0625-4a08-95c9-13b250a32fd9\" (UID: \"1c5ef100-0625-4a08-95c9-13b250a32fd9\") " Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.075023 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-utilities" (OuterVolumeSpecName: "utilities") pod "1c5ef100-0625-4a08-95c9-13b250a32fd9" (UID: "1c5ef100-0625-4a08-95c9-13b250a32fd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.081274 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5ef100-0625-4a08-95c9-13b250a32fd9-kube-api-access-mtqkp" (OuterVolumeSpecName: "kube-api-access-mtqkp") pod "1c5ef100-0625-4a08-95c9-13b250a32fd9" (UID: "1c5ef100-0625-4a08-95c9-13b250a32fd9"). InnerVolumeSpecName "kube-api-access-mtqkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.124621 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c5ef100-0625-4a08-95c9-13b250a32fd9" (UID: "1c5ef100-0625-4a08-95c9-13b250a32fd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.175851 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.175928 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtqkp\" (UniqueName: \"kubernetes.io/projected/1c5ef100-0625-4a08-95c9-13b250a32fd9-kube-api-access-mtqkp\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.175944 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ef100-0625-4a08-95c9-13b250a32fd9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.459546 4746 generic.go:334] "Generic (PLEG): container finished" podID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerID="2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e" exitCode=0 Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.459626 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m849x" event={"ID":"1c5ef100-0625-4a08-95c9-13b250a32fd9","Type":"ContainerDied","Data":"2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e"} Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.459652 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m849x" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.459685 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m849x" event={"ID":"1c5ef100-0625-4a08-95c9-13b250a32fd9","Type":"ContainerDied","Data":"908d8f76154b72ccfea8e99feeaee97cc5b45e0c493e5c0b335996df5e0f554b"} Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.459705 4746 scope.go:117] "RemoveContainer" containerID="2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.489141 4746 scope.go:117] "RemoveContainer" containerID="37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.499496 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m849x"] Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.506470 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m849x"] Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.514565 4746 scope.go:117] "RemoveContainer" containerID="f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.541597 4746 scope.go:117] "RemoveContainer" containerID="2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e" Jan 29 16:50:02 crc kubenswrapper[4746]: E0129 16:50:02.542722 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e\": container with ID starting with 2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e not found: ID does not exist" containerID="2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.542812 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e"} err="failed to get container status \"2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e\": rpc error: code = NotFound desc = could not find container \"2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e\": container with ID starting with 2c6e0f3ec85f4fe3993bc2c84afc1fcde9788a22bccc5967ae56eb14fd2b7e2e not found: ID does not exist" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.542857 4746 scope.go:117] "RemoveContainer" containerID="37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d" Jan 29 16:50:02 crc kubenswrapper[4746]: E0129 16:50:02.543565 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d\": container with ID starting with 37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d not found: ID does not exist" containerID="37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.543998 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d"} err="failed to get container status \"37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d\": rpc error: code = NotFound desc = could not find 
container \"37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d\": container with ID starting with 37d1070648908696666c1bc38db645eee8853d4661a3d4dae1b1e2fa4cf45a7d not found: ID does not exist" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.544022 4746 scope.go:117] "RemoveContainer" containerID="f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936" Jan 29 16:50:02 crc kubenswrapper[4746]: E0129 16:50:02.544489 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936\": container with ID starting with f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936 not found: ID does not exist" containerID="f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936" Jan 29 16:50:02 crc kubenswrapper[4746]: I0129 16:50:02.544840 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936"} err="failed to get container status \"f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936\": rpc error: code = NotFound desc = could not find container \"f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936\": container with ID starting with f1bb95fb8a511fe4bc82b92cc1d3c1357c0f78e3e13479a1324debefa443b936 not found: ID does not exist" Jan 29 16:50:04 crc kubenswrapper[4746]: I0129 16:50:04.218592 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:50:04 crc kubenswrapper[4746]: I0129 16:50:04.283612 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:50:04 crc kubenswrapper[4746]: I0129 16:50:04.454560 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" path="/var/lib/kubelet/pods/1c5ef100-0625-4a08-95c9-13b250a32fd9/volumes" Jan 29 16:50:09 crc kubenswrapper[4746]: I0129 16:50:09.222324 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k8qp6" Jan 29 16:50:09 crc kubenswrapper[4746]: I0129 16:50:09.814543 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7z9cv" Jan 29 16:50:09 crc kubenswrapper[4746]: I0129 16:50:09.907472 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-sllnl" Jan 29 16:50:12 crc kubenswrapper[4746]: I0129 16:50:12.900305 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v2k9q" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478114 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7"] Jan 29 16:50:14 crc kubenswrapper[4746]: E0129 16:50:14.478482 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="registry-server" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478502 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="registry-server" Jan 29 16:50:14 crc kubenswrapper[4746]: E0129 16:50:14.478518 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" 
containerName="extract-utilities" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478527 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerName="extract-utilities" Jan 29 16:50:14 crc kubenswrapper[4746]: E0129 16:50:14.478547 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="extract-utilities" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478555 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="extract-utilities" Jan 29 16:50:14 crc kubenswrapper[4746]: E0129 16:50:14.478565 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="extract-content" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478573 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="extract-content" Jan 29 16:50:14 crc kubenswrapper[4746]: E0129 16:50:14.478591 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerName="extract-content" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478600 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerName="extract-content" Jan 29 16:50:14 crc kubenswrapper[4746]: E0129 16:50:14.478614 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerName="registry-server" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478622 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerName="registry-server" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478748 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c2771e-a965-4dd6-aed4-83291bff40dc" containerName="registry-server" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.478771 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5ef100-0625-4a08-95c9-13b250a32fd9" containerName="registry-server" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.479873 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.482556 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.497041 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7"] Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.549701 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.549795 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6pp\" (UniqueName: \"kubernetes.io/projected/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-kube-api-access-wm6pp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.549999 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.651678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.651791 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.651874 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6pp\" (UniqueName: \"kubernetes.io/projected/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-kube-api-access-wm6pp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.652390 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.652395 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.672122 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6pp\" (UniqueName: \"kubernetes.io/projected/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-kube-api-access-wm6pp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:14 crc kubenswrapper[4746]: I0129 16:50:14.797832 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:15 crc kubenswrapper[4746]: I0129 16:50:15.263326 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7"] Jan 29 16:50:15 crc kubenswrapper[4746]: I0129 16:50:15.552883 4746 generic.go:334] "Generic (PLEG): container finished" podID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerID="4c15b7711e4b3426622459b34da2a8cd81c1dda47b547c6b97c60fd5b9d59118" exitCode=0 Jan 29 16:50:15 crc kubenswrapper[4746]: I0129 16:50:15.552928 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" event={"ID":"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0","Type":"ContainerDied","Data":"4c15b7711e4b3426622459b34da2a8cd81c1dda47b547c6b97c60fd5b9d59118"} Jan 29 16:50:15 crc kubenswrapper[4746]: I0129 16:50:15.552956 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" event={"ID":"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0","Type":"ContainerStarted","Data":"214fcbd3cccddca1cfa60901b2c288c2427f770065f8104c4784c13583b02abe"} Jan 29 16:50:18 crc kubenswrapper[4746]: I0129 16:50:18.583625 4746 generic.go:334] "Generic (PLEG): container finished" podID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerID="447bbf2f2dbb491cc91ac4b11685c3af33a5f269a83ea38ea5986cc858a6109a" exitCode=0 Jan 29 16:50:18 crc kubenswrapper[4746]: I0129 16:50:18.583707 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" event={"ID":"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0","Type":"ContainerDied","Data":"447bbf2f2dbb491cc91ac4b11685c3af33a5f269a83ea38ea5986cc858a6109a"} Jan 29 16:50:19 crc kubenswrapper[4746]: I0129 16:50:19.065426 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 29 16:50:19 crc kubenswrapper[4746]: I0129 16:50:19.065502 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:50:19 crc kubenswrapper[4746]: I0129 16:50:19.592260 4746 generic.go:334] "Generic (PLEG): container finished" podID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerID="2928d000025e002e7a8c9477a581c946bbae938aa48e7fae2d2f1932d010c40e" exitCode=0 Jan 29 16:50:19 crc kubenswrapper[4746]: I0129 16:50:19.592351 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" event={"ID":"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0","Type":"ContainerDied","Data":"2928d000025e002e7a8c9477a581c946bbae938aa48e7fae2d2f1932d010c40e"} Jan 29 16:50:20 crc kubenswrapper[4746]: I0129 16:50:20.849542 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:20 crc kubenswrapper[4746]: I0129 16:50:20.932928 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-bundle\") pod \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " Jan 29 16:50:20 crc kubenswrapper[4746]: I0129 16:50:20.933006 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-util\") pod \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " Jan 29 16:50:20 crc kubenswrapper[4746]: I0129 16:50:20.933043 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6pp\" (UniqueName: \"kubernetes.io/projected/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-kube-api-access-wm6pp\") pod \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\" (UID: \"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0\") " Jan 29 16:50:20 crc kubenswrapper[4746]: I0129 16:50:20.934644 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-bundle" (OuterVolumeSpecName: "bundle") pod "e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" (UID: "e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:50:20 crc kubenswrapper[4746]: I0129 16:50:20.939578 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-kube-api-access-wm6pp" (OuterVolumeSpecName: "kube-api-access-wm6pp") pod "e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" (UID: "e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0"). InnerVolumeSpecName "kube-api-access-wm6pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:50:20 crc kubenswrapper[4746]: I0129 16:50:20.949167 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-util" (OuterVolumeSpecName: "util") pod "e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" (UID: "e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:50:21 crc kubenswrapper[4746]: I0129 16:50:21.034244 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:21 crc kubenswrapper[4746]: I0129 16:50:21.034455 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:21 crc kubenswrapper[4746]: I0129 16:50:21.034532 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6pp\" (UniqueName: \"kubernetes.io/projected/e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0-kube-api-access-wm6pp\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:21 crc kubenswrapper[4746]: I0129 16:50:21.607322 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" event={"ID":"e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0","Type":"ContainerDied","Data":"214fcbd3cccddca1cfa60901b2c288c2427f770065f8104c4784c13583b02abe"} Jan 29 16:50:21 crc kubenswrapper[4746]: I0129 16:50:21.607367 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="214fcbd3cccddca1cfa60901b2c288c2427f770065f8104c4784c13583b02abe" Jan 29 16:50:21 crc kubenswrapper[4746]: I0129 16:50:21.607391 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.833694 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs"] Jan 29 16:50:28 crc kubenswrapper[4746]: E0129 16:50:28.834712 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerName="extract" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.834738 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerName="extract" Jan 29 16:50:28 crc kubenswrapper[4746]: E0129 16:50:28.834797 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerName="util" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.834811 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerName="util" Jan 29 16:50:28 crc kubenswrapper[4746]: E0129 16:50:28.834834 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerName="pull" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.834847 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerName="pull" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.835062 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0" containerName="extract" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.835831 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.837884 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.838128 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-qzm29" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.838126 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.863786 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs"] Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.944267 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b160d2-d505-4adc-9ca2-7bb12fa694df-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fgdbs\" (UID: \"67b160d2-d505-4adc-9ca2-7bb12fa694df\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:28 crc kubenswrapper[4746]: I0129 16:50:28.944319 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4cmw\" (UniqueName: \"kubernetes.io/projected/67b160d2-d505-4adc-9ca2-7bb12fa694df-kube-api-access-w4cmw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fgdbs\" (UID: \"67b160d2-d505-4adc-9ca2-7bb12fa694df\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:29 crc kubenswrapper[4746]: I0129 16:50:29.045453 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b160d2-d505-4adc-9ca2-7bb12fa694df-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fgdbs\" (UID: \"67b160d2-d505-4adc-9ca2-7bb12fa694df\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:29 crc kubenswrapper[4746]: I0129 16:50:29.045509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4cmw\" (UniqueName: \"kubernetes.io/projected/67b160d2-d505-4adc-9ca2-7bb12fa694df-kube-api-access-w4cmw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fgdbs\" (UID: \"67b160d2-d505-4adc-9ca2-7bb12fa694df\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:29 crc kubenswrapper[4746]: I0129 16:50:29.046144 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/67b160d2-d505-4adc-9ca2-7bb12fa694df-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fgdbs\" (UID: \"67b160d2-d505-4adc-9ca2-7bb12fa694df\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:29 crc kubenswrapper[4746]: I0129 16:50:29.080919 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4cmw\" (UniqueName: \"kubernetes.io/projected/67b160d2-d505-4adc-9ca2-7bb12fa694df-kube-api-access-w4cmw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-fgdbs\" (UID: \"67b160d2-d505-4adc-9ca2-7bb12fa694df\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:29 crc kubenswrapper[4746]: I0129 16:50:29.157539 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" Jan 29 16:50:29 crc kubenswrapper[4746]: I0129 16:50:29.575037 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs"] Jan 29 16:50:29 crc kubenswrapper[4746]: I0129 16:50:29.651468 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" event={"ID":"67b160d2-d505-4adc-9ca2-7bb12fa694df","Type":"ContainerStarted","Data":"0ebd912f1c2b33f2a8b303e91c8f4ba33888b5236000682a77ba63199449b154"} Jan 29 16:50:33 crc kubenswrapper[4746]: I0129 16:50:33.675124 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" event={"ID":"67b160d2-d505-4adc-9ca2-7bb12fa694df","Type":"ContainerStarted","Data":"86abba95ad46d6fee807a2269fa822a8642a90021e47f2858a36c76c77299e4c"} Jan 29 16:50:33 crc kubenswrapper[4746]: I0129 16:50:33.697403 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-fgdbs" podStartSLOduration=2.7458138869999997 podStartE2EDuration="5.697388612s" podCreationTimestamp="2026-01-29 16:50:28 +0000 UTC" firstStartedPulling="2026-01-29 16:50:29.577754426 +0000 UTC m=+951.978339080" lastFinishedPulling="2026-01-29 16:50:32.529329141 +0000 UTC m=+954.929913805" observedRunningTime="2026-01-29 16:50:33.694969668 +0000 UTC m=+956.095554312" watchObservedRunningTime="2026-01-29 16:50:33.697388612 +0000 UTC m=+956.097973256" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.434827 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nz8jz"] Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.436582 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.449315 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nz8jz"] Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.551369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-utilities\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.551455 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wm4\" (UniqueName: \"kubernetes.io/projected/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-kube-api-access-57wm4\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.551558 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-catalog-content\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.652566 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-utilities\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.652633 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57wm4\" (UniqueName: \"kubernetes.io/projected/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-kube-api-access-57wm4\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.652681 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-catalog-content\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.653260 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-catalog-content\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.653463 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-utilities\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.690676 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-57wm4\" (UniqueName: \"kubernetes.io/projected/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-kube-api-access-57wm4\") pod \"certified-operators-nz8jz\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:35 crc kubenswrapper[4746]: I0129 16:50:35.755513 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:36 crc kubenswrapper[4746]: I0129 16:50:36.000206 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nz8jz"] Jan 29 16:50:36 crc kubenswrapper[4746]: I0129 16:50:36.696043 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz8jz" event={"ID":"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5","Type":"ContainerStarted","Data":"c95394ca28a3ff5361ff4a04eb815520156c8b2d18b03a25176a6c5c3999cab6"} Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.812502 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-qf9b8"] Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.813342 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.818700 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.818781 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c77kg" Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.818845 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.830512 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-qf9b8"] Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.986534 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdp9\" (UniqueName: \"kubernetes.io/projected/6c6fa8e6-a609-4dd5-896f-a2e6d134b671-kube-api-access-5gdp9\") pod \"cert-manager-webhook-6888856db4-qf9b8\" (UID: \"6c6fa8e6-a609-4dd5-896f-a2e6d134b671\") " pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:37 crc kubenswrapper[4746]: I0129 16:50:37.986686 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6fa8e6-a609-4dd5-896f-a2e6d134b671-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-qf9b8\" (UID: \"6c6fa8e6-a609-4dd5-896f-a2e6d134b671\") " pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.087934 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6fa8e6-a609-4dd5-896f-a2e6d134b671-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-qf9b8\" (UID: \"6c6fa8e6-a609-4dd5-896f-a2e6d134b671\") " pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.088023 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gdp9\" (UniqueName: 
\"kubernetes.io/projected/6c6fa8e6-a609-4dd5-896f-a2e6d134b671-kube-api-access-5gdp9\") pod \"cert-manager-webhook-6888856db4-qf9b8\" (UID: \"6c6fa8e6-a609-4dd5-896f-a2e6d134b671\") " pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.113512 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gdp9\" (UniqueName: \"kubernetes.io/projected/6c6fa8e6-a609-4dd5-896f-a2e6d134b671-kube-api-access-5gdp9\") pod \"cert-manager-webhook-6888856db4-qf9b8\" (UID: \"6c6fa8e6-a609-4dd5-896f-a2e6d134b671\") " pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.117250 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6fa8e6-a609-4dd5-896f-a2e6d134b671-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-qf9b8\" (UID: \"6c6fa8e6-a609-4dd5-896f-a2e6d134b671\") " pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.127792 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.221763 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n7fr7"] Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.222568 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.224807 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vxkmx" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.235900 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n7fr7"] Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.364197 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-qf9b8"] Jan 29 16:50:38 crc kubenswrapper[4746]: W0129 16:50:38.371308 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6fa8e6_a609_4dd5_896f_a2e6d134b671.slice/crio-a51fc2d2a8169d25bc8783908e87aa54e28c2c0460321f761e500145e0d6fe19 WatchSource:0}: Error finding container a51fc2d2a8169d25bc8783908e87aa54e28c2c0460321f761e500145e0d6fe19: Status 404 returned error can't find the container with id a51fc2d2a8169d25bc8783908e87aa54e28c2c0460321f761e500145e0d6fe19 Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.391732 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghm5f\" (UniqueName: \"kubernetes.io/projected/5cff161f-24ec-499d-bcef-c964f4b40972-kube-api-access-ghm5f\") pod \"cert-manager-cainjector-5545bd876-n7fr7\" (UID: \"5cff161f-24ec-499d-bcef-c964f4b40972\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.392002 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cff161f-24ec-499d-bcef-c964f4b40972-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n7fr7\" (UID: \"5cff161f-24ec-499d-bcef-c964f4b40972\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.493781 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cff161f-24ec-499d-bcef-c964f4b40972-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n7fr7\" (UID: \"5cff161f-24ec-499d-bcef-c964f4b40972\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.494068 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghm5f\" (UniqueName: \"kubernetes.io/projected/5cff161f-24ec-499d-bcef-c964f4b40972-kube-api-access-ghm5f\") pod \"cert-manager-cainjector-5545bd876-n7fr7\" (UID: \"5cff161f-24ec-499d-bcef-c964f4b40972\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.508812 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cff161f-24ec-499d-bcef-c964f4b40972-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n7fr7\" (UID: \"5cff161f-24ec-499d-bcef-c964f4b40972\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.509569 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghm5f\" (UniqueName: \"kubernetes.io/projected/5cff161f-24ec-499d-bcef-c964f4b40972-kube-api-access-ghm5f\") pod \"cert-manager-cainjector-5545bd876-n7fr7\" (UID: \"5cff161f-24ec-499d-bcef-c964f4b40972\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.540507 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vxkmx" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.550022 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.705578 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" event={"ID":"6c6fa8e6-a609-4dd5-896f-a2e6d134b671","Type":"ContainerStarted","Data":"a51fc2d2a8169d25bc8783908e87aa54e28c2c0460321f761e500145e0d6fe19"} Jan 29 16:50:38 crc kubenswrapper[4746]: I0129 16:50:38.927458 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n7fr7"] Jan 29 16:50:38 crc kubenswrapper[4746]: W0129 16:50:38.932174 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cff161f_24ec_499d_bcef_c964f4b40972.slice/crio-74c2dce248619110e7a8e24d85dbffc2b1acaac1d7a98997c32ee08d90e90c88 WatchSource:0}: Error finding container 74c2dce248619110e7a8e24d85dbffc2b1acaac1d7a98997c32ee08d90e90c88: Status 404 returned error can't find the container with id 74c2dce248619110e7a8e24d85dbffc2b1acaac1d7a98997c32ee08d90e90c88 Jan 29 16:50:39 crc kubenswrapper[4746]: I0129 16:50:39.714044 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" event={"ID":"5cff161f-24ec-499d-bcef-c964f4b40972","Type":"ContainerStarted","Data":"74c2dce248619110e7a8e24d85dbffc2b1acaac1d7a98997c32ee08d90e90c88"} Jan 29 16:50:41 crc kubenswrapper[4746]: I0129 16:50:41.728810 4746 generic.go:334] "Generic (PLEG): container finished" podID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerID="e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf" exitCode=0 Jan 29 16:50:41 crc kubenswrapper[4746]: I0129 16:50:41.728871 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz8jz" event={"ID":"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5","Type":"ContainerDied","Data":"e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf"} Jan 29 16:50:42 crc kubenswrapper[4746]: I0129 16:50:42.737416 4746 generic.go:334] "Generic (PLEG): container finished" podID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerID="51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711" exitCode=0 Jan 29 16:50:42 crc kubenswrapper[4746]: I0129 16:50:42.737625 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz8jz" event={"ID":"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5","Type":"ContainerDied","Data":"51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711"} Jan 29 16:50:44 crc kubenswrapper[4746]: I0129 16:50:44.751942 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" event={"ID":"5cff161f-24ec-499d-bcef-c964f4b40972","Type":"ContainerStarted","Data":"1393273ecee64f5b516fdbedc5f009c8482b43a6f8295e7a6506ca9c8eca493c"} Jan 29 16:50:44 crc kubenswrapper[4746]: I0129 16:50:44.753905 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz8jz" event={"ID":"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5","Type":"ContainerStarted","Data":"c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc"} Jan 29 16:50:44 crc kubenswrapper[4746]: I0129 16:50:44.755926 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" 
event={"ID":"6c6fa8e6-a609-4dd5-896f-a2e6d134b671","Type":"ContainerStarted","Data":"eb08ba55b87e31ae3dca7e9fe17cb7327a40f80ba5c0041abe6fb97e90dba723"} Jan 29 16:50:44 crc kubenswrapper[4746]: I0129 16:50:44.756131 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:44 crc kubenswrapper[4746]: I0129 16:50:44.795064 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-n7fr7" podStartSLOduration=1.492583346 podStartE2EDuration="6.795044967s" podCreationTimestamp="2026-01-29 16:50:38 +0000 UTC" firstStartedPulling="2026-01-29 16:50:38.937884862 +0000 UTC m=+961.338469506" lastFinishedPulling="2026-01-29 16:50:44.240346483 +0000 UTC m=+966.640931127" observedRunningTime="2026-01-29 16:50:44.774513241 +0000 UTC m=+967.175097885" watchObservedRunningTime="2026-01-29 16:50:44.795044967 +0000 UTC m=+967.195629621" Jan 29 16:50:44 crc kubenswrapper[4746]: I0129 16:50:44.821664 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nz8jz" podStartSLOduration=7.312513059 podStartE2EDuration="9.821643056s" podCreationTimestamp="2026-01-29 16:50:35 +0000 UTC" firstStartedPulling="2026-01-29 16:50:41.731229876 +0000 UTC m=+964.131814530" lastFinishedPulling="2026-01-29 16:50:44.240359883 +0000 UTC m=+966.640944527" observedRunningTime="2026-01-29 16:50:44.821443991 +0000 UTC m=+967.222028635" watchObservedRunningTime="2026-01-29 16:50:44.821643056 +0000 UTC m=+967.222227710" Jan 29 16:50:44 crc kubenswrapper[4746]: I0129 16:50:44.825631 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" podStartSLOduration=1.9458295140000001 podStartE2EDuration="7.825608461s" podCreationTimestamp="2026-01-29 16:50:37 +0000 UTC" firstStartedPulling="2026-01-29 16:50:38.373751897 +0000 UTC m=+960.774336541" lastFinishedPulling="2026-01-29 16:50:44.253530844 +0000 UTC m=+966.654115488" observedRunningTime="2026-01-29 16:50:44.795382546 +0000 UTC m=+967.195967190" watchObservedRunningTime="2026-01-29 16:50:44.825608461 +0000 UTC m=+967.226193125" Jan 29 16:50:45 crc kubenswrapper[4746]: I0129 16:50:45.756997 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:45 crc kubenswrapper[4746]: I0129 16:50:45.757281 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:46 crc kubenswrapper[4746]: I0129 16:50:46.800612 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nz8jz" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="registry-server" probeResult="failure" output=< Jan 29 16:50:46 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Jan 29 16:50:46 crc kubenswrapper[4746]: > Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.065833 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.065909 4746 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.065961 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.066644 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3638d7699d354888da89723ea0a7801e67c37af27cf4d7fc2d221d9637b01dae"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.066718 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://3638d7699d354888da89723ea0a7801e67c37af27cf4d7fc2d221d9637b01dae" gracePeriod=600 Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.789106 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="3638d7699d354888da89723ea0a7801e67c37af27cf4d7fc2d221d9637b01dae" exitCode=0 Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.789169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"3638d7699d354888da89723ea0a7801e67c37af27cf4d7fc2d221d9637b01dae"} Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.789634 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"f497afed52a8e95c6830b33adef89933088f61ef0f396f26bc62e5bc61330609"} Jan 29 16:50:49 crc kubenswrapper[4746]: I0129 16:50:49.789653 4746 scope.go:117] "RemoveContainer" containerID="f56c479e12434b65f3040982c4c1ac3c63cd76a5e1a9e343b095f96d828b1ae6" Jan 29 16:50:53 crc kubenswrapper[4746]: I0129 16:50:53.131705 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-qf9b8" Jan 29 16:50:55 crc kubenswrapper[4746]: I0129 16:50:55.797470 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:55 crc kubenswrapper[4746]: I0129 16:50:55.837931 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.025213 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nz8jz"] Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.767481 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-tl7pv"] Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.768206 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.772915 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lnqll" Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.775300 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-tl7pv"] Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.833263 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nz8jz" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="registry-server" containerID="cri-o://c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc" gracePeriod=2 Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.964027 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgpt\" (UniqueName: \"kubernetes.io/projected/d3df6da0-1959-41ce-a71b-1546e4752437-kube-api-access-blgpt\") pod \"cert-manager-545d4d4674-tl7pv\" (UID: \"d3df6da0-1959-41ce-a71b-1546e4752437\") " pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:56 crc kubenswrapper[4746]: I0129 16:50:56.964146 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3df6da0-1959-41ce-a71b-1546e4752437-bound-sa-token\") pod \"cert-manager-545d4d4674-tl7pv\" (UID: \"d3df6da0-1959-41ce-a71b-1546e4752437\") " pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.065588 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgpt\" (UniqueName: \"kubernetes.io/projected/d3df6da0-1959-41ce-a71b-1546e4752437-kube-api-access-blgpt\") pod \"cert-manager-545d4d4674-tl7pv\" (UID: \"d3df6da0-1959-41ce-a71b-1546e4752437\") " pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.065877 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3df6da0-1959-41ce-a71b-1546e4752437-bound-sa-token\") pod \"cert-manager-545d4d4674-tl7pv\" (UID: \"d3df6da0-1959-41ce-a71b-1546e4752437\") " pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.087073 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgpt\" (UniqueName: \"kubernetes.io/projected/d3df6da0-1959-41ce-a71b-1546e4752437-kube-api-access-blgpt\") pod \"cert-manager-545d4d4674-tl7pv\" (UID: \"d3df6da0-1959-41ce-a71b-1546e4752437\") " pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.099120 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3df6da0-1959-41ce-a71b-1546e4752437-bound-sa-token\") pod \"cert-manager-545d4d4674-tl7pv\" (UID: \"d3df6da0-1959-41ce-a71b-1546e4752437\") " pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.304539 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.391519 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-tl7pv" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.474266 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-catalog-content\") pod \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.474891 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-utilities\") pod \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.475050 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57wm4\" (UniqueName: \"kubernetes.io/projected/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-kube-api-access-57wm4\") pod \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\" (UID: \"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5\") " Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.476074 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-utilities" (OuterVolumeSpecName: "utilities") pod "e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" (UID: "e2b1210a-6027-49aa-b6c4-c793d6b9f9f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.479334 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-kube-api-access-57wm4" (OuterVolumeSpecName: "kube-api-access-57wm4") pod "e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" (UID: "e2b1210a-6027-49aa-b6c4-c793d6b9f9f5"). InnerVolumeSpecName "kube-api-access-57wm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.521473 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" (UID: "e2b1210a-6027-49aa-b6c4-c793d6b9f9f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.576923 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.576952 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.576961 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57wm4\" (UniqueName: \"kubernetes.io/projected/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5-kube-api-access-57wm4\") on node \"crc\" DevicePath \"\"" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.775469 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-tl7pv"] Jan 29 16:50:57 crc kubenswrapper[4746]: W0129 16:50:57.779889 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3df6da0_1959_41ce_a71b_1546e4752437.slice/crio-d5590d790059b409ac810a18f22f4ba37e41ef40f594937097e53b9f7113d5b7 WatchSource:0}: Error finding container d5590d790059b409ac810a18f22f4ba37e41ef40f594937097e53b9f7113d5b7: Status 404 returned error can't find the container with id d5590d790059b409ac810a18f22f4ba37e41ef40f594937097e53b9f7113d5b7 Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.842252 4746 generic.go:334] "Generic (PLEG): container finished" podID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerID="c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc" exitCode=0 Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.842338 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nz8jz" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.842379 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz8jz" event={"ID":"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5","Type":"ContainerDied","Data":"c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc"} Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.842481 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nz8jz" event={"ID":"e2b1210a-6027-49aa-b6c4-c793d6b9f9f5","Type":"ContainerDied","Data":"c95394ca28a3ff5361ff4a04eb815520156c8b2d18b03a25176a6c5c3999cab6"} Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.842503 4746 scope.go:117] "RemoveContainer" containerID="c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.843620 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-tl7pv" event={"ID":"d3df6da0-1959-41ce-a71b-1546e4752437","Type":"ContainerStarted","Data":"d5590d790059b409ac810a18f22f4ba37e41ef40f594937097e53b9f7113d5b7"} Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.863329 4746 scope.go:117] "RemoveContainer" containerID="51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.895764 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nz8jz"] Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.897652 4746 scope.go:117] "RemoveContainer" containerID="e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.901379 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nz8jz"] Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.917166 4746 scope.go:117] "RemoveContainer" containerID="c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc" Jan 29 16:50:57 crc kubenswrapper[4746]: E0129 16:50:57.917922 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc\": container with ID starting with c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc not found: ID does not exist" containerID="c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.917957 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc"} err="failed to get container status \"c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc\": rpc error: code = NotFound desc = could not find container \"c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc\": container with ID starting with c208fc1b780f21d713120ce7843ecde1b5194915502396540d770f27bee672bc not found: ID does not exist" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.917981 4746 scope.go:117] "RemoveContainer" containerID="51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711" Jan 29 16:50:57 crc kubenswrapper[4746]: E0129 16:50:57.918231 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711\": container with ID starting with 51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711 not found: ID does not exist" containerID="51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.918259 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711"} err="failed to get container status \"51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711\": rpc error: code = NotFound desc = could not find container \"51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711\": container with ID starting with 51c5ec2c16c1887fa899144200f833c6fddff264ad735c440c6401a844280711 not found: ID does not exist" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.918279 4746 scope.go:117] "RemoveContainer" containerID="e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf" Jan 29 16:50:57 crc kubenswrapper[4746]: E0129 16:50:57.918886 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf\": container with ID starting with e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf not found: ID does not exist" containerID="e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf" Jan 29 16:50:57 crc kubenswrapper[4746]: I0129 16:50:57.918935 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf"} err="failed to get container status \"e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf\": rpc error: code = NotFound desc = could not find container \"e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf\": container with ID starting with e70163267283016c24020ccd756b4d468d5b34a7bc6feb29585a05e7a626ceaf not found: ID does not exist" Jan 29 16:50:58 crc kubenswrapper[4746]: I0129 16:50:58.456686 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" path="/var/lib/kubelet/pods/e2b1210a-6027-49aa-b6c4-c793d6b9f9f5/volumes" Jan 29 16:50:58 crc kubenswrapper[4746]: I0129 16:50:58.852619 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-tl7pv" event={"ID":"d3df6da0-1959-41ce-a71b-1546e4752437","Type":"ContainerStarted","Data":"9bcf767358bb6fd76379a1be5ea21a0fca881ab834e7da1cbd746babd8ccf76c"} Jan 29 16:50:58 crc kubenswrapper[4746]: I0129 16:50:58.880302 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-tl7pv" podStartSLOduration=2.880279937 podStartE2EDuration="2.880279937s" podCreationTimestamp="2026-01-29 16:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:50:58.876111706 +0000 UTC m=+981.276696360" watchObservedRunningTime="2026-01-29 16:50:58.880279937 +0000 UTC m=+981.280864621" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.316214 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zj6m5"] Jan 29 16:51:06 crc kubenswrapper[4746]: E0129 16:51:06.316992 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="extract-utilities" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.317009 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="extract-utilities" Jan 29 16:51:06 crc kubenswrapper[4746]: E0129 16:51:06.317024 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="registry-server" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.317033 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="registry-server" Jan 29 16:51:06 crc kubenswrapper[4746]: E0129 16:51:06.317048 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="extract-content" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.317058 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="extract-content" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.317217 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b1210a-6027-49aa-b6c4-c793d6b9f9f5" containerName="registry-server" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.317686 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zj6m5" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.321356 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.321370 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.323816 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mxvwz" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.346269 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zj6m5"] Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.501160 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lsb\" (UniqueName: \"kubernetes.io/projected/9b36cec2-8b3d-4070-a466-ad2a785727b2-kube-api-access-r4lsb\") pod \"openstack-operator-index-zj6m5\" (UID: \"9b36cec2-8b3d-4070-a466-ad2a785727b2\") " pod="openstack-operators/openstack-operator-index-zj6m5" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.603263 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4lsb\" (UniqueName: \"kubernetes.io/projected/9b36cec2-8b3d-4070-a466-ad2a785727b2-kube-api-access-r4lsb\") pod \"openstack-operator-index-zj6m5\" (UID: \"9b36cec2-8b3d-4070-a466-ad2a785727b2\") " pod="openstack-operators/openstack-operator-index-zj6m5" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.625807 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4lsb\" (UniqueName: \"kubernetes.io/projected/9b36cec2-8b3d-4070-a466-ad2a785727b2-kube-api-access-r4lsb\") pod \"openstack-operator-index-zj6m5\" (UID: \"9b36cec2-8b3d-4070-a466-ad2a785727b2\") " pod="openstack-operators/openstack-operator-index-zj6m5" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.636598 4746 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zj6m5" Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.832551 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zj6m5"] Jan 29 16:51:06 crc kubenswrapper[4746]: W0129 16:51:06.836244 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b36cec2_8b3d_4070_a466_ad2a785727b2.slice/crio-e271a7698bcbaa9f70410c3b15b0b6959f9c47ea6035af75999f08ad4a9ea9de WatchSource:0}: Error finding container e271a7698bcbaa9f70410c3b15b0b6959f9c47ea6035af75999f08ad4a9ea9de: Status 404 returned error can't find the container with id e271a7698bcbaa9f70410c3b15b0b6959f9c47ea6035af75999f08ad4a9ea9de Jan 29 16:51:06 crc kubenswrapper[4746]: I0129 16:51:06.912610 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zj6m5" event={"ID":"9b36cec2-8b3d-4070-a466-ad2a785727b2","Type":"ContainerStarted","Data":"e271a7698bcbaa9f70410c3b15b0b6959f9c47ea6035af75999f08ad4a9ea9de"} Jan 29 16:51:08 crc kubenswrapper[4746]: I0129 16:51:08.924649 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zj6m5" event={"ID":"9b36cec2-8b3d-4070-a466-ad2a785727b2","Type":"ContainerStarted","Data":"9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a"} Jan 29 16:51:08 crc kubenswrapper[4746]: I0129 16:51:08.946389 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zj6m5" podStartSLOduration=1.973373077 podStartE2EDuration="2.946343285s" podCreationTimestamp="2026-01-29 16:51:06 +0000 UTC" firstStartedPulling="2026-01-29 16:51:06.838397676 +0000 UTC m=+989.238982320" lastFinishedPulling="2026-01-29 16:51:07.811367884 +0000 UTC m=+990.211952528" observedRunningTime="2026-01-29 16:51:08.940710685 +0000 UTC m=+991.341295389" watchObservedRunningTime="2026-01-29 16:51:08.946343285 +0000 UTC m=+991.346927939" Jan 29 16:51:09 crc kubenswrapper[4746]: I0129 16:51:09.839908 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zj6m5"] Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.462738 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tnxns"] Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.464263 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tnxns" Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.471591 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tnxns"] Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.657630 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7czh\" (UniqueName: \"kubernetes.io/projected/3819ba4d-9ba5-40e9-ada2-d444d9a80bb5-kube-api-access-l7czh\") pod \"openstack-operator-index-tnxns\" (UID: \"3819ba4d-9ba5-40e9-ada2-d444d9a80bb5\") " pod="openstack-operators/openstack-operator-index-tnxns" Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.759885 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7czh\" (UniqueName: \"kubernetes.io/projected/3819ba4d-9ba5-40e9-ada2-d444d9a80bb5-kube-api-access-l7czh\") pod \"openstack-operator-index-tnxns\" (UID: \"3819ba4d-9ba5-40e9-ada2-d444d9a80bb5\") " pod="openstack-operators/openstack-operator-index-tnxns" Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.786479 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7czh\" (UniqueName: \"kubernetes.io/projected/3819ba4d-9ba5-40e9-ada2-d444d9a80bb5-kube-api-access-l7czh\") pod \"openstack-operator-index-tnxns\" (UID: \"3819ba4d-9ba5-40e9-ada2-d444d9a80bb5\") " pod="openstack-operators/openstack-operator-index-tnxns" Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.793518 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tnxns" Jan 29 16:51:10 crc kubenswrapper[4746]: I0129 16:51:10.938818 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zj6m5" podUID="9b36cec2-8b3d-4070-a466-ad2a785727b2" containerName="registry-server" containerID="cri-o://9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a" gracePeriod=2 Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.032470 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tnxns"] Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.254981 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zj6m5" Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.367960 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4lsb\" (UniqueName: \"kubernetes.io/projected/9b36cec2-8b3d-4070-a466-ad2a785727b2-kube-api-access-r4lsb\") pod \"9b36cec2-8b3d-4070-a466-ad2a785727b2\" (UID: \"9b36cec2-8b3d-4070-a466-ad2a785727b2\") " Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.374651 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b36cec2-8b3d-4070-a466-ad2a785727b2-kube-api-access-r4lsb" (OuterVolumeSpecName: "kube-api-access-r4lsb") pod "9b36cec2-8b3d-4070-a466-ad2a785727b2" (UID: "9b36cec2-8b3d-4070-a466-ad2a785727b2"). InnerVolumeSpecName "kube-api-access-r4lsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.470499 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4lsb\" (UniqueName: \"kubernetes.io/projected/9b36cec2-8b3d-4070-a466-ad2a785727b2-kube-api-access-r4lsb\") on node \"crc\" DevicePath \"\"" Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.958033 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tnxns" event={"ID":"3819ba4d-9ba5-40e9-ada2-d444d9a80bb5","Type":"ContainerStarted","Data":"4201c6e3ac6b3436b29dbf9b40ab0894646c57a161638bcc8038174606df415f"} Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.958588 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tnxns" event={"ID":"3819ba4d-9ba5-40e9-ada2-d444d9a80bb5","Type":"ContainerStarted","Data":"b51f20d3e09d05984191c86040c568dacc25f229685dfea655cba09d2a021730"} Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.963052 4746 generic.go:334] "Generic (PLEG): container finished" podID="9b36cec2-8b3d-4070-a466-ad2a785727b2" containerID="9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a" exitCode=0 Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.963109 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zj6m5" Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.963158 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zj6m5" event={"ID":"9b36cec2-8b3d-4070-a466-ad2a785727b2","Type":"ContainerDied","Data":"9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a"} Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.963279 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zj6m5" event={"ID":"9b36cec2-8b3d-4070-a466-ad2a785727b2","Type":"ContainerDied","Data":"e271a7698bcbaa9f70410c3b15b0b6959f9c47ea6035af75999f08ad4a9ea9de"} Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.963351 4746 scope.go:117] "RemoveContainer" containerID="9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a" Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.984013 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tnxns" podStartSLOduration=1.515378667 podStartE2EDuration="1.983989531s" podCreationTimestamp="2026-01-29 16:51:10 +0000 UTC" firstStartedPulling="2026-01-29 16:51:11.041563115 +0000 UTC m=+993.442147759" lastFinishedPulling="2026-01-29 16:51:11.510173979 +0000 UTC m=+993.910758623" observedRunningTime="2026-01-29 16:51:11.977651993 +0000 UTC m=+994.378236647" watchObservedRunningTime="2026-01-29 16:51:11.983989531 +0000 UTC m=+994.384574185" Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.994571 4746 scope.go:117] "RemoveContainer" containerID="9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a" Jan 29 16:51:11 crc kubenswrapper[4746]: E0129 16:51:11.995326 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a\": container with ID starting with 9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a not found: ID does not exist" containerID="9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a" 
Jan 29 16:51:11 crc kubenswrapper[4746]: I0129 16:51:11.995376 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a"} err="failed to get container status \"9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a\": rpc error: code = NotFound desc = could not find container \"9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a\": container with ID starting with 9edc6461b0461d9025346b35ef2bea14a73ed2f6499be5139b41f515ed6ab82a not found: ID does not exist"
Jan 29 16:51:12 crc kubenswrapper[4746]: I0129 16:51:12.013456 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zj6m5"]
Jan 29 16:51:12 crc kubenswrapper[4746]: I0129 16:51:12.021287 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zj6m5"]
Jan 29 16:51:12 crc kubenswrapper[4746]: I0129 16:51:12.466413 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b36cec2-8b3d-4070-a466-ad2a785727b2" path="/var/lib/kubelet/pods/9b36cec2-8b3d-4070-a466-ad2a785727b2/volumes"
Jan 29 16:51:20 crc kubenswrapper[4746]: I0129 16:51:20.794080 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-tnxns"
Jan 29 16:51:20 crc kubenswrapper[4746]: I0129 16:51:20.794705 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-tnxns"
Jan 29 16:51:20 crc kubenswrapper[4746]: I0129 16:51:20.831749 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-tnxns"
Jan 29 16:51:21 crc kubenswrapper[4746]: I0129 16:51:21.053455 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-tnxns"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.461562 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"]
Jan 29 16:51:28 crc kubenswrapper[4746]: E0129 16:51:28.462482 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b36cec2-8b3d-4070-a466-ad2a785727b2" containerName="registry-server"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.462496 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b36cec2-8b3d-4070-a466-ad2a785727b2" containerName="registry-server"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.462619 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b36cec2-8b3d-4070-a466-ad2a785727b2" containerName="registry-server"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.463597 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.465429 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4jtf2"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.474595 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"]
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.609992 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kj6v\" (UniqueName: \"kubernetes.io/projected/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-kube-api-access-5kj6v\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.610074 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.610105 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.711052 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.711097 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.711154 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kj6v\" (UniqueName: \"kubernetes.io/projected/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-kube-api-access-5kj6v\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.711713 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.711996 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.731992 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kj6v\" (UniqueName: \"kubernetes.io/projected/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-kube-api-access-5kj6v\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:28 crc kubenswrapper[4746]: I0129 16:51:28.780713 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:29 crc kubenswrapper[4746]: I0129 16:51:29.208576 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"]
Jan 29 16:51:29 crc kubenswrapper[4746]: W0129 16:51:29.213446 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ca75f8d_6a20_4e49_8d94_a5f7159239cd.slice/crio-79d08dc114f322cd0757a551a8579e899c4417756e5ed77cd75c84905f78966e WatchSource:0}: Error finding container 79d08dc114f322cd0757a551a8579e899c4417756e5ed77cd75c84905f78966e: Status 404 returned error can't find the container with id 79d08dc114f322cd0757a551a8579e899c4417756e5ed77cd75c84905f78966e
Jan 29 16:51:30 crc kubenswrapper[4746]: I0129 16:51:30.088165 4746 generic.go:334] "Generic (PLEG): container finished" podID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerID="b73db50816cd13603e327dd2133c78c5705d9b88dc1a27b84f519bd7c1c34886" exitCode=0
Jan 29 16:51:30 crc kubenswrapper[4746]: I0129 16:51:30.088235 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8" event={"ID":"7ca75f8d-6a20-4e49-8d94-a5f7159239cd","Type":"ContainerDied","Data":"b73db50816cd13603e327dd2133c78c5705d9b88dc1a27b84f519bd7c1c34886"}
Jan 29 16:51:30 crc kubenswrapper[4746]: I0129 16:51:30.088499 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8" event={"ID":"7ca75f8d-6a20-4e49-8d94-a5f7159239cd","Type":"ContainerStarted","Data":"79d08dc114f322cd0757a551a8579e899c4417756e5ed77cd75c84905f78966e"}
Jan 29 16:51:31 crc kubenswrapper[4746]: I0129 16:51:31.099133 4746 generic.go:334] "Generic (PLEG): container finished" podID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerID="36049e865a65095c60a3c3d96acb7e91bf6a86b896c1633024e747b7ae6c2f41" exitCode=0
Jan 29 16:51:31 crc kubenswrapper[4746]: I0129 16:51:31.099215 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8" event={"ID":"7ca75f8d-6a20-4e49-8d94-a5f7159239cd","Type":"ContainerDied","Data":"36049e865a65095c60a3c3d96acb7e91bf6a86b896c1633024e747b7ae6c2f41"}
Jan 29 16:51:31 crc kubenswrapper[4746]: E0129 16:51:31.318079 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ca75f8d_6a20_4e49_8d94_a5f7159239cd.slice/crio-conmon-f3e041251fc00246a413df28c4b8af06e9c2a958fceaefad1222e23cb1ee9675.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 16:51:32 crc kubenswrapper[4746]: I0129 16:51:32.110314 4746 generic.go:334] "Generic (PLEG): container finished" podID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerID="f3e041251fc00246a413df28c4b8af06e9c2a958fceaefad1222e23cb1ee9675" exitCode=0
Jan 29 16:51:32 crc kubenswrapper[4746]: I0129 16:51:32.110356 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8" event={"ID":"7ca75f8d-6a20-4e49-8d94-a5f7159239cd","Type":"ContainerDied","Data":"f3e041251fc00246a413df28c4b8af06e9c2a958fceaefad1222e23cb1ee9675"}
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.407119 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.574724 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-bundle\") pod \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") "
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.575049 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kj6v\" (UniqueName: \"kubernetes.io/projected/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-kube-api-access-5kj6v\") pod \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") "
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.575099 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-util\") pod \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\" (UID: \"7ca75f8d-6a20-4e49-8d94-a5f7159239cd\") "
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.575944 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-bundle" (OuterVolumeSpecName: "bundle") pod "7ca75f8d-6a20-4e49-8d94-a5f7159239cd" (UID: "7ca75f8d-6a20-4e49-8d94-a5f7159239cd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.581567 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-kube-api-access-5kj6v" (OuterVolumeSpecName: "kube-api-access-5kj6v") pod "7ca75f8d-6a20-4e49-8d94-a5f7159239cd" (UID: "7ca75f8d-6a20-4e49-8d94-a5f7159239cd"). InnerVolumeSpecName "kube-api-access-5kj6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.589463 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-util" (OuterVolumeSpecName: "util") pod "7ca75f8d-6a20-4e49-8d94-a5f7159239cd" (UID: "7ca75f8d-6a20-4e49-8d94-a5f7159239cd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.676507 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-util\") on node \"crc\" DevicePath \"\""
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.676553 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:51:33 crc kubenswrapper[4746]: I0129 16:51:33.676566 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kj6v\" (UniqueName: \"kubernetes.io/projected/7ca75f8d-6a20-4e49-8d94-a5f7159239cd-kube-api-access-5kj6v\") on node \"crc\" DevicePath \"\""
Jan 29 16:51:34 crc kubenswrapper[4746]: I0129 16:51:34.126498 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8" event={"ID":"7ca75f8d-6a20-4e49-8d94-a5f7159239cd","Type":"ContainerDied","Data":"79d08dc114f322cd0757a551a8579e899c4417756e5ed77cd75c84905f78966e"}
Jan 29 16:51:34 crc kubenswrapper[4746]: I0129 16:51:34.126544 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d08dc114f322cd0757a551a8579e899c4417756e5ed77cd75c84905f78966e"
Jan 29 16:51:34 crc kubenswrapper[4746]: I0129 16:51:34.126576 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.540501 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"]
Jan 29 16:51:40 crc kubenswrapper[4746]: E0129 16:51:40.541391 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerName="util"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.541410 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerName="util"
Jan 29 16:51:40 crc kubenswrapper[4746]: E0129 16:51:40.541424 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerName="extract"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.541431 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerName="extract"
Jan 29 16:51:40 crc kubenswrapper[4746]: E0129 16:51:40.541457 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerName="pull"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.541465 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerName="pull"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.541599 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca75f8d-6a20-4e49-8d94-a5f7159239cd" containerName="extract"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.541989 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.544635 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jpb84"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.575777 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6btb5\" (UniqueName: \"kubernetes.io/projected/ecd5da6c-5219-47b2-b290-402277b01934-kube-api-access-6btb5\") pod \"openstack-operator-controller-init-757f46c65d-znnl8\" (UID: \"ecd5da6c-5219-47b2-b290-402277b01934\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.607241 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"]
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.676968 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6btb5\" (UniqueName: \"kubernetes.io/projected/ecd5da6c-5219-47b2-b290-402277b01934-kube-api-access-6btb5\") pod \"openstack-operator-controller-init-757f46c65d-znnl8\" (UID: \"ecd5da6c-5219-47b2-b290-402277b01934\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.696034 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6btb5\" (UniqueName: \"kubernetes.io/projected/ecd5da6c-5219-47b2-b290-402277b01934-kube-api-access-6btb5\") pod \"openstack-operator-controller-init-757f46c65d-znnl8\" (UID: \"ecd5da6c-5219-47b2-b290-402277b01934\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"
Jan 29 16:51:40 crc kubenswrapper[4746]: I0129 16:51:40.859795 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"
Jan 29 16:51:41 crc kubenswrapper[4746]: I0129 16:51:41.303584 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"]
Jan 29 16:51:42 crc kubenswrapper[4746]: I0129 16:51:42.177393 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8" event={"ID":"ecd5da6c-5219-47b2-b290-402277b01934","Type":"ContainerStarted","Data":"deb02ed9014800d6050565658fa1ff3444f6e7ecd99ab5be688f7138f1927ba8"}
Jan 29 16:51:45 crc kubenswrapper[4746]: I0129 16:51:45.200947 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8" event={"ID":"ecd5da6c-5219-47b2-b290-402277b01934","Type":"ContainerStarted","Data":"6ad105f2edf6c33558110f5f9963d2e3c986f74fe9c99f6c3c8d19c04714a23e"}
Jan 29 16:51:45 crc kubenswrapper[4746]: I0129 16:51:45.201564 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"
Jan 29 16:51:45 crc kubenswrapper[4746]: I0129 16:51:45.232003 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8" podStartSLOduration=1.593086706 podStartE2EDuration="5.231986456s" podCreationTimestamp="2026-01-29 16:51:40 +0000 UTC" firstStartedPulling="2026-01-29 16:51:41.313039292 +0000 UTC m=+1023.713623936" lastFinishedPulling="2026-01-29 16:51:44.951939032 +0000 UTC m=+1027.352523686" observedRunningTime="2026-01-29 16:51:45.227111376 +0000 UTC m=+1027.627696030" watchObservedRunningTime="2026-01-29 16:51:45.231986456 +0000 UTC m=+1027.632571100"
Jan 29 16:51:50 crc kubenswrapper[4746]: I0129 16:51:50.864710 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-znnl8"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.746962 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.749295 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.753956 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mdv9v"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.755960 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.756901 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.762668 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fnjvs"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.770893 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.776898 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.778025 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.781304 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5vnr7"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.784213 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.789712 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-95h56"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.790779 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.792387 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-r6x22"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.802942 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.810923 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-95h56"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.829245 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.830001 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.833921 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6gpvj"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.844344 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.845550 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.854701 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bt4j5"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.860179 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.868251 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.876286 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-q94c9"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.877130 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.879705 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vxc6l"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.880032 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.898581 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-q94c9"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.920541 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.921555 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.926156 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.926509 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzhj\" (UniqueName: \"kubernetes.io/projected/eaf582f0-a5ab-4ec3-8171-d7800c624ef9-kube-api-access-plzhj\") pod \"designate-operator-controller-manager-6d9697b7f4-kgx8p\" (UID: \"eaf582f0-a5ab-4ec3-8171-d7800c624ef9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.926594 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhj96\" (UniqueName: \"kubernetes.io/projected/40e94942-ca30-4f19-b3bb-0dd32a419bb4-kube-api-access-vhj96\") pod \"cinder-operator-controller-manager-8d874c8fc-52kgv\" (UID: \"40e94942-ca30-4f19-b3bb-0dd32a419bb4\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.926658 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrjf\" (UniqueName: \"kubernetes.io/projected/cc3d7c3e-3d38-43f3-92ce-e4696ed6e776-kube-api-access-dsrjf\") pod \"glance-operator-controller-manager-8886f4c47-95h56\" (UID: \"cc3d7c3e-3d38-43f3-92ce-e4696ed6e776\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.926697 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7fd\" (UniqueName: \"kubernetes.io/projected/dd135c2c-9e2d-434e-82b0-8f5a8bbd0123-kube-api-access-rk7fd\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-2bww8\" (UID: \"dd135c2c-9e2d-434e-82b0-8f5a8bbd0123\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.926946 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.933619 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-75ggg"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.933705 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qz5rv"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.947296 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.958817 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.959644 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.969167 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gtp7c"
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.974356 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h"]
Jan 29 16:52:27 crc kubenswrapper[4746]: I0129 16:52:27.996319 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz"]
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.003641 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz"]
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.004509 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz"
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.006688 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-72mk7"
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.030884 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz"]
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.036571 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk7fd\" (UniqueName: \"kubernetes.io/projected/dd135c2c-9e2d-434e-82b0-8f5a8bbd0123-kube-api-access-rk7fd\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-2bww8\" (UID: \"dd135c2c-9e2d-434e-82b0-8f5a8bbd0123\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8"
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.036615 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjtv\" (UniqueName: \"kubernetes.io/projected/d72b4019-1caf-4f5e-8324-790ff6d0c4b1-kube-api-access-khjtv\") pod \"keystone-operator-controller-manager-84f48565d4-l886h\" (UID: \"d72b4019-1caf-4f5e-8324-790ff6d0c4b1\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h"
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.036667 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8kd5\" (UniqueName: \"kubernetes.io/projected/26375595-f5e1-4568-ac8b-8db08398d97a-kube-api-access-z8kd5\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9"
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.040325 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plzhj\" (UniqueName: \"kubernetes.io/projected/eaf582f0-a5ab-4ec3-8171-d7800c624ef9-kube-api-access-plzhj\") pod \"designate-operator-controller-manager-6d9697b7f4-kgx8p\" (UID: \"eaf582f0-a5ab-4ec3-8171-d7800c624ef9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p"
Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.040379 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vhj96\" (UniqueName: \"kubernetes.io/projected/40e94942-ca30-4f19-b3bb-0dd32a419bb4-kube-api-access-vhj96\") pod \"cinder-operator-controller-manager-8d874c8fc-52kgv\" (UID: \"40e94942-ca30-4f19-b3bb-0dd32a419bb4\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.040405 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.040425 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzncr\" (UniqueName: \"kubernetes.io/projected/8d781faa-902f-41cf-ab3a-ad07d2322345-kube-api-access-gzncr\") pod \"heat-operator-controller-manager-69d6db494d-lzcpg\" (UID: \"8d781faa-902f-41cf-ab3a-ad07d2322345\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.040450 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrjf\" (UniqueName: \"kubernetes.io/projected/cc3d7c3e-3d38-43f3-92ce-e4696ed6e776-kube-api-access-dsrjf\") pod \"glance-operator-controller-manager-8886f4c47-95h56\" (UID: \"cc3d7c3e-3d38-43f3-92ce-e4696ed6e776\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.040467 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kxm\" (UniqueName: \"kubernetes.io/projected/7629581f-7c9b-4b2a-9296-3afc1abca26c-kube-api-access-k6kxm\") pod \"horizon-operator-controller-manager-5fb775575f-9r8qw\" (UID: \"7629581f-7c9b-4b2a-9296-3afc1abca26c\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.040489 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmpps\" (UniqueName: \"kubernetes.io/projected/50aedec4-5794-4aac-ae6d-32c393128b8b-kube-api-access-xmpps\") pod \"ironic-operator-controller-manager-5f4b8bd54d-xqxxr\" (UID: \"50aedec4-5794-4aac-ae6d-32c393128b8b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.037454 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.041306 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.042088 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.042951 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.045330 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qd28b" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.046081 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2n8pm" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.056169 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.067775 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.071143 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsrjf\" (UniqueName: \"kubernetes.io/projected/cc3d7c3e-3d38-43f3-92ce-e4696ed6e776-kube-api-access-dsrjf\") pod \"glance-operator-controller-manager-8886f4c47-95h56\" (UID: \"cc3d7c3e-3d38-43f3-92ce-e4696ed6e776\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.084780 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.085630 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.088915 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzhj\" (UniqueName: \"kubernetes.io/projected/eaf582f0-a5ab-4ec3-8171-d7800c624ef9-kube-api-access-plzhj\") pod \"designate-operator-controller-manager-6d9697b7f4-kgx8p\" (UID: \"eaf582f0-a5ab-4ec3-8171-d7800c624ef9\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.091900 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk7fd\" (UniqueName: \"kubernetes.io/projected/dd135c2c-9e2d-434e-82b0-8f5a8bbd0123-kube-api-access-rk7fd\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-2bww8\" (UID: \"dd135c2c-9e2d-434e-82b0-8f5a8bbd0123\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.093987 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhj96\" (UniqueName: \"kubernetes.io/projected/40e94942-ca30-4f19-b3bb-0dd32a419bb4-kube-api-access-vhj96\") pod \"cinder-operator-controller-manager-8d874c8fc-52kgv\" (UID: \"40e94942-ca30-4f19-b3bb-0dd32a419bb4\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.111080 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vwrgv" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.111382 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd"] Jan 29 16:52:28 crc 
kubenswrapper[4746]: I0129 16:52:28.112250 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.134634 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.144715 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q9k4\" (UniqueName: \"kubernetes.io/projected/c61df0fc-95b5-4b58-9801-75325f20e182-kube-api-access-2q9k4\") pod \"mariadb-operator-controller-manager-67bf948998-74gtz\" (UID: \"c61df0fc-95b5-4b58-9801-75325f20e182\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.144786 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8kd5\" (UniqueName: \"kubernetes.io/projected/26375595-f5e1-4568-ac8b-8db08398d97a-kube-api-access-z8kd5\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.144853 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crccl\" (UniqueName: \"kubernetes.io/projected/df82c829-be95-477f-a566-2dec382d4598-kube-api-access-crccl\") pod \"manila-operator-controller-manager-7dd968899f-ggshz\" (UID: \"df82c829-be95-477f-a566-2dec382d4598\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.144888 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.144915 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzncr\" (UniqueName: \"kubernetes.io/projected/8d781faa-902f-41cf-ab3a-ad07d2322345-kube-api-access-gzncr\") pod \"heat-operator-controller-manager-69d6db494d-lzcpg\" (UID: \"8d781faa-902f-41cf-ab3a-ad07d2322345\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.144948 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6kxm\" (UniqueName: \"kubernetes.io/projected/7629581f-7c9b-4b2a-9296-3afc1abca26c-kube-api-access-k6kxm\") pod \"horizon-operator-controller-manager-5fb775575f-9r8qw\" (UID: \"7629581f-7c9b-4b2a-9296-3afc1abca26c\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.144984 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmpps\" (UniqueName: \"kubernetes.io/projected/50aedec4-5794-4aac-ae6d-32c393128b8b-kube-api-access-xmpps\") pod \"ironic-operator-controller-manager-5f4b8bd54d-xqxxr\" (UID: \"50aedec4-5794-4aac-ae6d-32c393128b8b\") " 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.145015 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khjtv\" (UniqueName: \"kubernetes.io/projected/d72b4019-1caf-4f5e-8324-790ff6d0c4b1-kube-api-access-khjtv\") pod \"keystone-operator-controller-manager-84f48565d4-l886h\" (UID: \"d72b4019-1caf-4f5e-8324-790ff6d0c4b1\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.148489 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.148548 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert podName:26375595-f5e1-4568-ac8b-8db08398d97a nodeName:}" failed. No retries permitted until 2026-01-29 16:52:28.648529428 +0000 UTC m=+1071.049114072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert") pod "infra-operator-controller-manager-79955696d6-q94c9" (UID: "26375595-f5e1-4568-ac8b-8db08398d97a") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.167604 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.168518 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khjtv\" (UniqueName: \"kubernetes.io/projected/d72b4019-1caf-4f5e-8324-790ff6d0c4b1-kube-api-access-khjtv\") pod \"keystone-operator-controller-manager-84f48565d4-l886h\" (UID: \"d72b4019-1caf-4f5e-8324-790ff6d0c4b1\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.173687 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8kd5\" (UniqueName: \"kubernetes.io/projected/26375595-f5e1-4568-ac8b-8db08398d97a-kube-api-access-z8kd5\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.170513 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.178826 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-9r57c" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.182442 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6kxm\" (UniqueName: \"kubernetes.io/projected/7629581f-7c9b-4b2a-9296-3afc1abca26c-kube-api-access-k6kxm\") pod \"horizon-operator-controller-manager-5fb775575f-9r8qw\" (UID: \"7629581f-7c9b-4b2a-9296-3afc1abca26c\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.183822 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmpps\" (UniqueName: \"kubernetes.io/projected/50aedec4-5794-4aac-ae6d-32c393128b8b-kube-api-access-xmpps\") pod \"ironic-operator-controller-manager-5f4b8bd54d-xqxxr\" (UID: \"50aedec4-5794-4aac-ae6d-32c393128b8b\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.188783 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzncr\" (UniqueName: \"kubernetes.io/projected/8d781faa-902f-41cf-ab3a-ad07d2322345-kube-api-access-gzncr\") pod \"heat-operator-controller-manager-69d6db494d-lzcpg\" (UID: \"8d781faa-902f-41cf-ab3a-ad07d2322345\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.197304 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.201769 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.203348 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ks6rw" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.207005 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.210029 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.217275 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.218404 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.220487 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.221377 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-c8dlx" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.232163 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.244564 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.245357 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.246218 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mz7j\" (UniqueName: \"kubernetes.io/projected/4427df73-583c-45ea-b592-0d282ac0b2d7-kube-api-access-9mz7j\") pod \"octavia-operator-controller-manager-6687f8d877-h6pgd\" (UID: \"4427df73-583c-45ea-b592-0d282ac0b2d7\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.246269 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q9k4\" (UniqueName: \"kubernetes.io/projected/c61df0fc-95b5-4b58-9801-75325f20e182-kube-api-access-2q9k4\") pod \"mariadb-operator-controller-manager-67bf948998-74gtz\" (UID: \"c61df0fc-95b5-4b58-9801-75325f20e182\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.246308 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w88jq\" (UniqueName: \"kubernetes.io/projected/ada28b6b-5615-4bc1-ba2e-f1ab3408b64c-kube-api-access-w88jq\") pod \"neutron-operator-controller-manager-585dbc889-x4k9t\" (UID: \"ada28b6b-5615-4bc1-ba2e-f1ab3408b64c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.246326 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crccl\" (UniqueName: \"kubernetes.io/projected/df82c829-be95-477f-a566-2dec382d4598-kube-api-access-crccl\") pod \"manila-operator-controller-manager-7dd968899f-ggshz\" (UID: \"df82c829-be95-477f-a566-2dec382d4598\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.246379 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24wsj\" (UniqueName: \"kubernetes.io/projected/3462ff04-ad8e-4f2d-872a-26bb98e59484-kube-api-access-24wsj\") pod \"nova-operator-controller-manager-55bff696bd-2t2tl\" (UID: \"3462ff04-ad8e-4f2d-872a-26bb98e59484\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 
16:52:28.247007 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qfd5j" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.268830 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.272439 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q9k4\" (UniqueName: \"kubernetes.io/projected/c61df0fc-95b5-4b58-9801-75325f20e182-kube-api-access-2q9k4\") pod \"mariadb-operator-controller-manager-67bf948998-74gtz\" (UID: \"c61df0fc-95b5-4b58-9801-75325f20e182\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.277504 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crccl\" (UniqueName: \"kubernetes.io/projected/df82c829-be95-477f-a566-2dec382d4598-kube-api-access-crccl\") pod \"manila-operator-controller-manager-7dd968899f-ggshz\" (UID: \"df82c829-be95-477f-a566-2dec382d4598\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.295599 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.303519 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.306408 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.314573 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.327488 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.330271 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.331344 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.332341 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l5t4s" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.343373 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350588 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gn7n\" (UniqueName: \"kubernetes.io/projected/de3fc290-5f51-4b4e-845e-5f1020dd31bc-kube-api-access-9gn7n\") pod \"placement-operator-controller-manager-5b964cf4cd-kzmwp\" (UID: \"de3fc290-5f51-4b4e-845e-5f1020dd31bc\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350642 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-972bc\" (UniqueName: \"kubernetes.io/projected/30bcd237-b2b9-4c61-974b-bca62a288e84-kube-api-access-972bc\") pod \"swift-operator-controller-manager-68fc8c869-t88fd\" (UID: \"30bcd237-b2b9-4c61-974b-bca62a288e84\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350681 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w88jq\" (UniqueName: \"kubernetes.io/projected/ada28b6b-5615-4bc1-ba2e-f1ab3408b64c-kube-api-access-w88jq\") pod \"neutron-operator-controller-manager-585dbc889-x4k9t\" (UID: \"ada28b6b-5615-4bc1-ba2e-f1ab3408b64c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350702 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p82ct\" (UniqueName: \"kubernetes.io/projected/38062c77-be0b-4138-b77e-330e2dd20cc0-kube-api-access-p82ct\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350749 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlcr\" (UniqueName: \"kubernetes.io/projected/51559923-8ee0-41ae-b204-c7de34da4745-kube-api-access-wzlcr\") pod \"ovn-operator-controller-manager-788c46999f-rdtrn\" (UID: \"51559923-8ee0-41ae-b204-c7de34da4745\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350774 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350833 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24wsj\" (UniqueName: 
\"kubernetes.io/projected/3462ff04-ad8e-4f2d-872a-26bb98e59484-kube-api-access-24wsj\") pod \"nova-operator-controller-manager-55bff696bd-2t2tl\" (UID: \"3462ff04-ad8e-4f2d-872a-26bb98e59484\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.350862 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mz7j\" (UniqueName: \"kubernetes.io/projected/4427df73-583c-45ea-b592-0d282ac0b2d7-kube-api-access-9mz7j\") pod \"octavia-operator-controller-manager-6687f8d877-h6pgd\" (UID: \"4427df73-583c-45ea-b592-0d282ac0b2d7\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.377712 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24wsj\" (UniqueName: \"kubernetes.io/projected/3462ff04-ad8e-4f2d-872a-26bb98e59484-kube-api-access-24wsj\") pod \"nova-operator-controller-manager-55bff696bd-2t2tl\" (UID: \"3462ff04-ad8e-4f2d-872a-26bb98e59484\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.377914 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.382486 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mz7j\" (UniqueName: \"kubernetes.io/projected/4427df73-583c-45ea-b592-0d282ac0b2d7-kube-api-access-9mz7j\") pod \"octavia-operator-controller-manager-6687f8d877-h6pgd\" (UID: \"4427df73-583c-45ea-b592-0d282ac0b2d7\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.384602 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.386502 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w88jq\" (UniqueName: \"kubernetes.io/projected/ada28b6b-5615-4bc1-ba2e-f1ab3408b64c-kube-api-access-w88jq\") pod \"neutron-operator-controller-manager-585dbc889-x4k9t\" (UID: \"ada28b6b-5615-4bc1-ba2e-f1ab3408b64c\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.394168 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.395289 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.398886 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xfwwc" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.403636 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.426477 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-cdm9l"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.427602 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.429923 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-55smt" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.445340 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-cdm9l"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.454901 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gn7n\" (UniqueName: \"kubernetes.io/projected/de3fc290-5f51-4b4e-845e-5f1020dd31bc-kube-api-access-9gn7n\") pod \"placement-operator-controller-manager-5b964cf4cd-kzmwp\" (UID: \"de3fc290-5f51-4b4e-845e-5f1020dd31bc\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.454937 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5vfv\" (UniqueName: \"kubernetes.io/projected/257b1191-ee6f-42f3-9894-78f1a43cfd3d-kube-api-access-z5vfv\") pod \"telemetry-operator-controller-manager-64b5b76f97-ls2m7\" (UID: \"257b1191-ee6f-42f3-9894-78f1a43cfd3d\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.454960 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-972bc\" (UniqueName: \"kubernetes.io/projected/30bcd237-b2b9-4c61-974b-bca62a288e84-kube-api-access-972bc\") pod \"swift-operator-controller-manager-68fc8c869-t88fd\" (UID: \"30bcd237-b2b9-4c61-974b-bca62a288e84\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.454984 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p82ct\" (UniqueName: \"kubernetes.io/projected/38062c77-be0b-4138-b77e-330e2dd20cc0-kube-api-access-p82ct\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.455018 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlcr\" (UniqueName: \"kubernetes.io/projected/51559923-8ee0-41ae-b204-c7de34da4745-kube-api-access-wzlcr\") pod \"ovn-operator-controller-manager-788c46999f-rdtrn\" (UID: 
\"51559923-8ee0-41ae-b204-c7de34da4745\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.455037 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.460367 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.460458 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert podName:38062c77-be0b-4138-b77e-330e2dd20cc0 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:28.96043673 +0000 UTC m=+1071.361021374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" (UID: "38062c77-be0b-4138-b77e-330e2dd20cc0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.476370 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.479955 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gn7n\" (UniqueName: \"kubernetes.io/projected/de3fc290-5f51-4b4e-845e-5f1020dd31bc-kube-api-access-9gn7n\") pod \"placement-operator-controller-manager-5b964cf4cd-kzmwp\" (UID: \"de3fc290-5f51-4b4e-845e-5f1020dd31bc\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.480916 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlcr\" (UniqueName: \"kubernetes.io/projected/51559923-8ee0-41ae-b204-c7de34da4745-kube-api-access-wzlcr\") pod \"ovn-operator-controller-manager-788c46999f-rdtrn\" (UID: \"51559923-8ee0-41ae-b204-c7de34da4745\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.489175 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.489955 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.495132 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p82ct\" (UniqueName: \"kubernetes.io/projected/38062c77-be0b-4138-b77e-330e2dd20cc0-kube-api-access-p82ct\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.495786 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4d284" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.495949 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.496046 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.499517 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.506145 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.514123 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-972bc\" (UniqueName: \"kubernetes.io/projected/30bcd237-b2b9-4c61-974b-bca62a288e84-kube-api-access-972bc\") pod \"swift-operator-controller-manager-68fc8c869-t88fd\" (UID: \"30bcd237-b2b9-4c61-974b-bca62a288e84\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.521718 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.537494 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.556375 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5vfv\" (UniqueName: \"kubernetes.io/projected/257b1191-ee6f-42f3-9894-78f1a43cfd3d-kube-api-access-z5vfv\") pod \"telemetry-operator-controller-manager-64b5b76f97-ls2m7\" (UID: \"257b1191-ee6f-42f3-9894-78f1a43cfd3d\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.556460 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbd27\" (UniqueName: \"kubernetes.io/projected/07d0b8e0-e804-4007-a924-bdc50e4c1843-kube-api-access-hbd27\") pod \"test-operator-controller-manager-56f8bfcd9f-rmnff\" (UID: \"07d0b8e0-e804-4007-a924-bdc50e4c1843\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.556705 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.556938 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqvj\" (UniqueName: \"kubernetes.io/projected/d3b1117b-c064-4425-94ff-7d9b2ff94b8d-kube-api-access-smqvj\") pod \"watcher-operator-controller-manager-564965969-cdm9l\" (UID: \"d3b1117b-c064-4425-94ff-7d9b2ff94b8d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.578858 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5vfv\" (UniqueName: \"kubernetes.io/projected/257b1191-ee6f-42f3-9894-78f1a43cfd3d-kube-api-access-z5vfv\") pod \"telemetry-operator-controller-manager-64b5b76f97-ls2m7\" (UID: \"257b1191-ee6f-42f3-9894-78f1a43cfd3d\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.590468 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.597050 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.598272 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.605663 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8mfdg" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.624214 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.659101 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.659141 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.659275 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6gx\" (UniqueName: \"kubernetes.io/projected/4157a634-ad19-42ac-9ef4-7249fe50798f-kube-api-access-pn6gx\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.659317 
4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqvj\" (UniqueName: \"kubernetes.io/projected/d3b1117b-c064-4425-94ff-7d9b2ff94b8d-kube-api-access-smqvj\") pod \"watcher-operator-controller-manager-564965969-cdm9l\" (UID: \"d3b1117b-c064-4425-94ff-7d9b2ff94b8d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.659496 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.659537 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbd27\" (UniqueName: \"kubernetes.io/projected/07d0b8e0-e804-4007-a924-bdc50e4c1843-kube-api-access-hbd27\") pod \"test-operator-controller-manager-56f8bfcd9f-rmnff\" (UID: \"07d0b8e0-e804-4007-a924-bdc50e4c1843\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.671543 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.671646 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert podName:26375595-f5e1-4568-ac8b-8db08398d97a nodeName:}" failed. No retries permitted until 2026-01-29 16:52:29.671612218 +0000 UTC m=+1072.072196862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert") pod "infra-operator-controller-manager-79955696d6-q94c9" (UID: "26375595-f5e1-4568-ac8b-8db08398d97a") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.672834 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.687829 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.700298 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-95h56"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.711319 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p"] Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.712258 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqvj\" (UniqueName: \"kubernetes.io/projected/d3b1117b-c064-4425-94ff-7d9b2ff94b8d-kube-api-access-smqvj\") pod \"watcher-operator-controller-manager-564965969-cdm9l\" (UID: \"d3b1117b-c064-4425-94ff-7d9b2ff94b8d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.715479 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbd27\" (UniqueName: \"kubernetes.io/projected/07d0b8e0-e804-4007-a924-bdc50e4c1843-kube-api-access-hbd27\") pod \"test-operator-controller-manager-56f8bfcd9f-rmnff\" (UID: \"07d0b8e0-e804-4007-a924-bdc50e4c1843\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.758417 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.760852 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2j2\" (UniqueName: \"kubernetes.io/projected/8f22c3e7-ca4b-471c-8c28-6d817f25582d-kube-api-access-qj2j2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lw9ng\" (UID: \"8f22c3e7-ca4b-471c-8c28-6d817f25582d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.760929 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.760958 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.761003 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6gx\" (UniqueName: \"kubernetes.io/projected/4157a634-ad19-42ac-9ef4-7249fe50798f-kube-api-access-pn6gx\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.761455 4746 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.761495 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:29.261481404 +0000 UTC m=+1071.662066048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "metrics-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.762140 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.762170 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:29.262162751 +0000 UTC m=+1071.662747395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.789954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6gx\" (UniqueName: \"kubernetes.io/projected/4157a634-ad19-42ac-9ef4-7249fe50798f-kube-api-access-pn6gx\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:28 crc kubenswrapper[4746]: W0129 16:52:28.822348 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf582f0_a5ab_4ec3_8171_d7800c624ef9.slice/crio-443d56bddaa9f0f3c7c6ad0a621cf073631e1a3e13a99a9547c890e8d36f817d WatchSource:0}: Error finding container 443d56bddaa9f0f3c7c6ad0a621cf073631e1a3e13a99a9547c890e8d36f817d: Status 404 returned error can't find the container with id 443d56bddaa9f0f3c7c6ad0a621cf073631e1a3e13a99a9547c890e8d36f817d Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.865092 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj2j2\" (UniqueName: \"kubernetes.io/projected/8f22c3e7-ca4b-471c-8c28-6d817f25582d-kube-api-access-qj2j2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lw9ng\" (UID: \"8f22c3e7-ca4b-471c-8c28-6d817f25582d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.884954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj2j2\" (UniqueName: \"kubernetes.io/projected/8f22c3e7-ca4b-471c-8c28-6d817f25582d-kube-api-access-qj2j2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lw9ng\" (UID: \"8f22c3e7-ca4b-471c-8c28-6d817f25582d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" Jan 29 16:52:28 crc kubenswrapper[4746]: 
I0129 16:52:28.957747 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.968421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.968683 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: E0129 16:52:28.968762 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert podName:38062c77-be0b-4138-b77e-330e2dd20cc0 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:29.968744427 +0000 UTC m=+1072.369329071 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" (UID: "38062c77-be0b-4138-b77e-330e2dd20cc0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:28 crc kubenswrapper[4746]: I0129 16:52:28.978478 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.005083 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.023359 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.045561 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd135c2c_9e2d_434e_82b0_8f5a8bbd0123.slice/crio-3cf8c2734483c261ea0c0c9c1fcc8f68fbce3c8599eb910e120aea45decc3152 WatchSource:0}: Error finding container 3cf8c2734483c261ea0c0c9c1fcc8f68fbce3c8599eb910e120aea45decc3152: Status 404 returned error can't find the container with id 3cf8c2734483c261ea0c0c9c1fcc8f68fbce3c8599eb910e120aea45decc3152 Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.063431 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7629581f_7c9b_4b2a_9296_3afc1abca26c.slice/crio-1343e7cde1da70e35160c3b7553d8d51862b4e2387a035483164d337b867f808 WatchSource:0}: Error finding container 1343e7cde1da70e35160c3b7553d8d51862b4e2387a035483164d337b867f808: Status 404 returned error can't find the container with id 1343e7cde1da70e35160c3b7553d8d51862b4e2387a035483164d337b867f808 Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.092339 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.101141 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.233442 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.256624 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.279617 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.279656 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.279823 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.279873 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:30.279858379 +0000 UTC m=+1072.680443023 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "webhook-server-cert" not found Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.280345 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.280428 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:30.280404553 +0000 UTC m=+1072.680989267 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "metrics-server-cert" not found Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.312657 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc61df0fc_95b5_4b58_9801_75325f20e182.slice/crio-979b6a27ea98f6371a80c08078219f164294d871028fb2c4c29064ee749685c6 WatchSource:0}: Error finding container 979b6a27ea98f6371a80c08078219f164294d871028fb2c4c29064ee749685c6: Status 404 returned error can't find the container with id 979b6a27ea98f6371a80c08078219f164294d871028fb2c4c29064ee749685c6 Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.513461 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.522355 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.527856 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd"] Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.531310 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3fc290_5f51_4b4e_845e_5f1020dd31bc.slice/crio-81b257d8aaa85893a370aab044a65d457142cc153d4e21a10fde0da8284d13c9 WatchSource:0}: Error finding container 81b257d8aaa85893a370aab044a65d457142cc153d4e21a10fde0da8284d13c9: Status 404 returned error can't find the container with id 81b257d8aaa85893a370aab044a65d457142cc153d4e21a10fde0da8284d13c9 Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.533354 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" event={"ID":"c61df0fc-95b5-4b58-9801-75325f20e182","Type":"ContainerStarted","Data":"979b6a27ea98f6371a80c08078219f164294d871028fb2c4c29064ee749685c6"} Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.544955 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" event={"ID":"d72b4019-1caf-4f5e-8324-790ff6d0c4b1","Type":"ContainerStarted","Data":"5d01b4073adb437f19b90d56489bb9d854d2d223778903ea6f35faf218043717"} Jan 29 16:52:29 crc 
kubenswrapper[4746]: I0129 16:52:29.547648 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" event={"ID":"7629581f-7c9b-4b2a-9296-3afc1abca26c","Type":"ContainerStarted","Data":"1343e7cde1da70e35160c3b7553d8d51862b4e2387a035483164d337b867f808"} Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.548913 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" event={"ID":"50aedec4-5794-4aac-ae6d-32c393128b8b","Type":"ContainerStarted","Data":"e930e3ddc263b2e19103b4ede8d13f3f2f1e1c33e8e47d1390408e17856a8cc8"} Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.550402 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" event={"ID":"cc3d7c3e-3d38-43f3-92ce-e4696ed6e776","Type":"ContainerStarted","Data":"305562ed6e78eb2e9f78c203ec716cbcc0f55964d253519db4fbcd8fbf5ae6f2"} Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.551756 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8" event={"ID":"dd135c2c-9e2d-434e-82b0-8f5a8bbd0123","Type":"ContainerStarted","Data":"3cf8c2734483c261ea0c0c9c1fcc8f68fbce3c8599eb910e120aea45decc3152"} Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.552991 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p" event={"ID":"eaf582f0-a5ab-4ec3-8171-d7800c624ef9","Type":"ContainerStarted","Data":"443d56bddaa9f0f3c7c6ad0a621cf073631e1a3e13a99a9547c890e8d36f817d"} Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.554251 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" event={"ID":"df82c829-be95-477f-a566-2dec382d4598","Type":"ContainerStarted","Data":"6070304e93b3d2e4dbdd02c2edd7934571fe3a2c43c7ac245e0f63880de82160"} Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.629343 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.641593 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.654356 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl"] Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.654684 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51559923_8ee0_41ae_b204_c7de34da4745.slice/crio-d877423c92ac862c4413733ffa2afe7931a8327abfd1ef9b33923a07d65aec31 WatchSource:0}: Error finding container d877423c92ac862c4413733ffa2afe7931a8327abfd1ef9b33923a07d65aec31: Status 404 returned error can't find the container with id d877423c92ac862c4413733ffa2afe7931a8327abfd1ef9b33923a07d65aec31 Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.662424 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.668177 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd"] Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.678637 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w88jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-x4k9t_openstack-operators(ada28b6b-5615-4bc1-ba2e-f1ab3408b64c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.679311 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-972bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-t88fd_openstack-operators(30bcd237-b2b9-4c61-974b-bca62a288e84): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.679980 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" podUID="ada28b6b-5615-4bc1-ba2e-f1ab3408b64c" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.680439 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" podUID="30bcd237-b2b9-4c61-974b-bca62a288e84" Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.687820 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.687953 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.688004 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert podName:26375595-f5e1-4568-ac8b-8db08398d97a nodeName:}" failed. No retries permitted until 2026-01-29 16:52:31.687990105 +0000 UTC m=+1074.088574749 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert") pod "infra-operator-controller-manager-79955696d6-q94c9" (UID: "26375595-f5e1-4568-ac8b-8db08398d97a") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.764174 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7"] Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.772637 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng"] Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.773429 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257b1191_ee6f_42f3_9894_78f1a43cfd3d.slice/crio-687d010c53aac3cd202a5b9a36b48c7bce850c70bc1a47a759d0d9505fa19ebf WatchSource:0}: Error finding container 687d010c53aac3cd202a5b9a36b48c7bce850c70bc1a47a759d0d9505fa19ebf: Status 404 returned error can't find the container with id 687d010c53aac3cd202a5b9a36b48c7bce850c70bc1a47a759d0d9505fa19ebf Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.776364 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5vfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-ls2m7_openstack-operators(257b1191-ee6f-42f3-9894-78f1a43cfd3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.777039 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3b1117b_c064_4425_94ff_7d9b2ff94b8d.slice/crio-a2feff2b3c33f572d08c4aa99744290f92a96962563517186e2777f47971c1ec WatchSource:0}: Error finding container a2feff2b3c33f572d08c4aa99744290f92a96962563517186e2777f47971c1ec: Status 404 returned error can't find the container with id a2feff2b3c33f572d08c4aa99744290f92a96962563517186e2777f47971c1ec Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.777602 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" podUID="257b1191-ee6f-42f3-9894-78f1a43cfd3d" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.779207 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smqvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-cdm9l_openstack-operators(d3b1117b-c064-4425-94ff-7d9b2ff94b8d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.779961 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f22c3e7_ca4b_471c_8c28_6d817f25582d.slice/crio-8e88bb2d4926bc73431b8f3e57757bab4d54aab2cca0e725bf219a21ca1d10a2 WatchSource:0}: Error finding container 8e88bb2d4926bc73431b8f3e57757bab4d54aab2cca0e725bf219a21ca1d10a2: Status 404 returned error can't find the container with id 8e88bb2d4926bc73431b8f3e57757bab4d54aab2cca0e725bf219a21ca1d10a2 Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.780674 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" podUID="d3b1117b-c064-4425-94ff-7d9b2ff94b8d" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.782452 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj2j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lw9ng_openstack-operators(8f22c3e7-ca4b-471c-8c28-6d817f25582d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.783213 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-cdm9l"] Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.783938 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" podUID="8f22c3e7-ca4b-471c-8c28-6d817f25582d" Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.851656 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff"] Jan 29 16:52:29 crc kubenswrapper[4746]: W0129 16:52:29.856266 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07d0b8e0_e804_4007_a924_bdc50e4c1843.slice/crio-bdbee505899162a141fc865583a6d24c5f23edcebfa894fadf68a665ca3744d0 WatchSource:0}: Error finding container bdbee505899162a141fc865583a6d24c5f23edcebfa894fadf68a665ca3744d0: Status 404 returned error can't find the container with id bdbee505899162a141fc865583a6d24c5f23edcebfa894fadf68a665ca3744d0 Jan 29 16:52:29 crc kubenswrapper[4746]: I0129 16:52:29.992102 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.992291 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:29 crc kubenswrapper[4746]: E0129 16:52:29.992580 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert podName:38062c77-be0b-4138-b77e-330e2dd20cc0 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:31.992563912 +0000 UTC m=+1074.393148556 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" (UID: "38062c77-be0b-4138-b77e-330e2dd20cc0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.297801 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.297863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.298021 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.298109 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:32.298091135 +0000 UTC m=+1074.698675769 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "webhook-server-cert" not found Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.298472 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.298518 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:32.298509646 +0000 UTC m=+1074.699094290 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "metrics-server-cert" not found Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.564853 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" event={"ID":"ada28b6b-5615-4bc1-ba2e-f1ab3408b64c","Type":"ContainerStarted","Data":"14be4891536c3b7d3e86aee0b8336a709789f7d987a040f5343f5a3f4aef0424"} Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.568096 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" podUID="ada28b6b-5615-4bc1-ba2e-f1ab3408b64c" Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.573144 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" event={"ID":"de3fc290-5f51-4b4e-845e-5f1020dd31bc","Type":"ContainerStarted","Data":"81b257d8aaa85893a370aab044a65d457142cc153d4e21a10fde0da8284d13c9"} Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.588617 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" event={"ID":"257b1191-ee6f-42f3-9894-78f1a43cfd3d","Type":"ContainerStarted","Data":"687d010c53aac3cd202a5b9a36b48c7bce850c70bc1a47a759d0d9505fa19ebf"} Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.591429 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" podUID="257b1191-ee6f-42f3-9894-78f1a43cfd3d" Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.594092 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" event={"ID":"51559923-8ee0-41ae-b204-c7de34da4745","Type":"ContainerStarted","Data":"d877423c92ac862c4413733ffa2afe7931a8327abfd1ef9b33923a07d65aec31"} Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.599338 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" event={"ID":"4427df73-583c-45ea-b592-0d282ac0b2d7","Type":"ContainerStarted","Data":"09db1ca8b4c0e674bb04f7280194b7ebe0b56c6911c09f620430b12497d9342f"} Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.606711 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" event={"ID":"d3b1117b-c064-4425-94ff-7d9b2ff94b8d","Type":"ContainerStarted","Data":"a2feff2b3c33f572d08c4aa99744290f92a96962563517186e2777f47971c1ec"} Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.612952 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" 
event={"ID":"30bcd237-b2b9-4c61-974b-bca62a288e84","Type":"ContainerStarted","Data":"e8584fba868833ad0bcb5b996bbf4bc7715575edcd45b16c49330fa50c0797b3"} Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.613681 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" podUID="d3b1117b-c064-4425-94ff-7d9b2ff94b8d" Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.614796 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" podUID="30bcd237-b2b9-4c61-974b-bca62a288e84" Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.624820 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" event={"ID":"8d781faa-902f-41cf-ab3a-ad07d2322345","Type":"ContainerStarted","Data":"e5f619575b507d2bb030e8dfdcd939bcd9053ad52171f8a58dfbd0ebe5e35f7f"} Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.635826 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" event={"ID":"3462ff04-ad8e-4f2d-872a-26bb98e59484","Type":"ContainerStarted","Data":"a9adb95d769cd6820caee2bb931b8f8c204f3f998e804b716fa189af154a0903"} Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.640107 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" event={"ID":"07d0b8e0-e804-4007-a924-bdc50e4c1843","Type":"ContainerStarted","Data":"bdbee505899162a141fc865583a6d24c5f23edcebfa894fadf68a665ca3744d0"} Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.664833 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" event={"ID":"8f22c3e7-ca4b-471c-8c28-6d817f25582d","Type":"ContainerStarted","Data":"8e88bb2d4926bc73431b8f3e57757bab4d54aab2cca0e725bf219a21ca1d10a2"} Jan 29 16:52:30 crc kubenswrapper[4746]: E0129 16:52:30.669629 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" podUID="8f22c3e7-ca4b-471c-8c28-6d817f25582d" Jan 29 16:52:30 crc kubenswrapper[4746]: I0129 16:52:30.674082 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" event={"ID":"40e94942-ca30-4f19-b3bb-0dd32a419bb4","Type":"ContainerStarted","Data":"71edc534b7c9511599ac6cecd9bd797732e21f510e8da7fa0cb80ad4b8cac8f9"} Jan 29 16:52:31 crc kubenswrapper[4746]: E0129 16:52:31.687010 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" podUID="8f22c3e7-ca4b-471c-8c28-6d817f25582d" Jan 29 16:52:31 crc kubenswrapper[4746]: E0129 16:52:31.687077 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" podUID="257b1191-ee6f-42f3-9894-78f1a43cfd3d" Jan 29 16:52:31 crc kubenswrapper[4746]: E0129 16:52:31.687154 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" podUID="ada28b6b-5615-4bc1-ba2e-f1ab3408b64c" Jan 29 16:52:31 crc kubenswrapper[4746]: E0129 16:52:31.687234 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" podUID="30bcd237-b2b9-4c61-974b-bca62a288e84" Jan 29 16:52:31 crc kubenswrapper[4746]: E0129 16:52:31.687278 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" podUID="d3b1117b-c064-4425-94ff-7d9b2ff94b8d" Jan 29 16:52:31 crc kubenswrapper[4746]: I0129 16:52:31.725997 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:31 crc kubenswrapper[4746]: E0129 16:52:31.726179 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:31 crc kubenswrapper[4746]: E0129 16:52:31.726243 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert podName:26375595-f5e1-4568-ac8b-8db08398d97a nodeName:}" failed. No retries permitted until 2026-01-29 16:52:35.726227495 +0000 UTC m=+1078.126812139 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert") pod "infra-operator-controller-manager-79955696d6-q94c9" (UID: "26375595-f5e1-4568-ac8b-8db08398d97a") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:32 crc kubenswrapper[4746]: I0129 16:52:32.031441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:32 crc kubenswrapper[4746]: E0129 16:52:32.031627 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:32 crc kubenswrapper[4746]: E0129 16:52:32.031707 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert podName:38062c77-be0b-4138-b77e-330e2dd20cc0 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:36.031687456 +0000 UTC m=+1078.432272100 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" (UID: "38062c77-be0b-4138-b77e-330e2dd20cc0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:32 crc kubenswrapper[4746]: I0129 16:52:32.335002 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:32 crc kubenswrapper[4746]: I0129 16:52:32.335285 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:32 crc kubenswrapper[4746]: E0129 16:52:32.335472 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:52:32 crc kubenswrapper[4746]: E0129 16:52:32.335477 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:52:32 crc kubenswrapper[4746]: E0129 16:52:32.335533 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:36.335517133 +0000 UTC m=+1078.736101777 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "webhook-server-cert" not found Jan 29 16:52:32 crc kubenswrapper[4746]: E0129 16:52:32.335563 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:36.335540704 +0000 UTC m=+1078.736125348 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "metrics-server-cert" not found Jan 29 16:52:35 crc kubenswrapper[4746]: I0129 16:52:35.783980 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:35 crc kubenswrapper[4746]: E0129 16:52:35.784180 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:35 crc kubenswrapper[4746]: E0129 16:52:35.784380 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert podName:26375595-f5e1-4568-ac8b-8db08398d97a nodeName:}" failed. No retries permitted until 2026-01-29 16:52:43.784362026 +0000 UTC m=+1086.184946670 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert") pod "infra-operator-controller-manager-79955696d6-q94c9" (UID: "26375595-f5e1-4568-ac8b-8db08398d97a") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:52:36 crc kubenswrapper[4746]: I0129 16:52:36.088473 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:36 crc kubenswrapper[4746]: E0129 16:52:36.088965 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:36 crc kubenswrapper[4746]: E0129 16:52:36.089027 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert podName:38062c77-be0b-4138-b77e-330e2dd20cc0 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:44.089011045 +0000 UTC m=+1086.489595689 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" (UID: "38062c77-be0b-4138-b77e-330e2dd20cc0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:36 crc kubenswrapper[4746]: I0129 16:52:36.393600 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:36 crc kubenswrapper[4746]: I0129 16:52:36.393641 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:36 crc kubenswrapper[4746]: E0129 16:52:36.393799 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:52:36 crc kubenswrapper[4746]: E0129 16:52:36.393859 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:44.393835389 +0000 UTC m=+1086.794420033 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "webhook-server-cert" not found Jan 29 16:52:36 crc kubenswrapper[4746]: E0129 16:52:36.393797 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:52:36 crc kubenswrapper[4746]: E0129 16:52:36.393951 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:52:44.393923361 +0000 UTC m=+1086.794508005 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "metrics-server-cert" not found Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.753649 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" event={"ID":"df82c829-be95-477f-a566-2dec382d4598","Type":"ContainerStarted","Data":"5e8b4edd711426361f74c57116012513ab723ee8836d558989f623b479ce9413"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.754325 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.755029 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" event={"ID":"c61df0fc-95b5-4b58-9801-75325f20e182","Type":"ContainerStarted","Data":"a0fa25411484b14b64b7f3f60973060056758c7f7e371e25ccc3c0ca1d2ebe52"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.755519 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.756642 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" event={"ID":"8d781faa-902f-41cf-ab3a-ad07d2322345","Type":"ContainerStarted","Data":"9192296f3a2830fa6241f58c42614a2cb3eada00713ed72f12188c88171bdbd9"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.756795 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.758950 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" event={"ID":"51559923-8ee0-41ae-b204-c7de34da4745","Type":"ContainerStarted","Data":"8938e9120def0a7e7acba68c72632e96ea2eff6430e1421ddd6a02d2228ec324"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.759096 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.761037 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" event={"ID":"4427df73-583c-45ea-b592-0d282ac0b2d7","Type":"ContainerStarted","Data":"ebdebdb6a39780358a4526c830f85b88edfa1316e328c06af375777ae51a4e17"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.761553 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.763213 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" event={"ID":"7629581f-7c9b-4b2a-9296-3afc1abca26c","Type":"ContainerStarted","Data":"8cb01eea6177385f37f8c5a20ce54edd3ded3394a5e9b71e47259734effbbea5"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.763577 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.764915 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8" event={"ID":"dd135c2c-9e2d-434e-82b0-8f5a8bbd0123","Type":"ContainerStarted","Data":"003a030866142fa4530a4106f20ecbcd05c5a6a2b43a43c0ccceb55b526669e8"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.765359 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.766761 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" event={"ID":"cc3d7c3e-3d38-43f3-92ce-e4696ed6e776","Type":"ContainerStarted","Data":"108c85e7fd13b169a99395dd1beafe62b4feefe9550bf1df2f4f3238a995aab9"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.767082 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.768345 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" event={"ID":"3462ff04-ad8e-4f2d-872a-26bb98e59484","Type":"ContainerStarted","Data":"a0cd20cedb3bd31b6766a9e2ef30b35734439537b41403e20fec73883e467612"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.768775 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.769908 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" event={"ID":"50aedec4-5794-4aac-ae6d-32c393128b8b","Type":"ContainerStarted","Data":"68b664cef0726a49e32eb2b5ca2968ac8c9135ecb1f810a3debfce1645391c96"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.770297 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.771383 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" event={"ID":"de3fc290-5f51-4b4e-845e-5f1020dd31bc","Type":"ContainerStarted","Data":"f72c43a5338ec185eee17c30ebfb4043daefdc6a16cef9e98a403dee6c7a1e26"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.771759 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.773000 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" event={"ID":"d72b4019-1caf-4f5e-8324-790ff6d0c4b1","Type":"ContainerStarted","Data":"0d01268b44ab772cfcd13994d53b3de4d46b30ef8c4500ab92d7e70f36285669"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.773143 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.774374 4746 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p" event={"ID":"eaf582f0-a5ab-4ec3-8171-d7800c624ef9","Type":"ContainerStarted","Data":"4407a149145d790f3ea2a36408e7467814c1a6c6d61c5391809606da487f958f"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.774434 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.775710 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" event={"ID":"07d0b8e0-e804-4007-a924-bdc50e4c1843","Type":"ContainerStarted","Data":"5d0ed96a584244c90a16159c9fe42ca46cc055eb4a6cfc0f699dd5bede2c9264"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.775788 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.776977 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" event={"ID":"40e94942-ca30-4f19-b3bb-0dd32a419bb4","Type":"ContainerStarted","Data":"34c0cd231b1946dee9d00a51048bd103f9143680875a46cf22fd5daf324fb2a7"} Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.777353 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.781860 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" podStartSLOduration=3.377661605 podStartE2EDuration="15.781835852s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.134611907 +0000 UTC m=+1071.535196551" lastFinishedPulling="2026-01-29 16:52:41.538786154 +0000 UTC m=+1083.939370798" observedRunningTime="2026-01-29 16:52:42.779284304 +0000 UTC m=+1085.179868948" watchObservedRunningTime="2026-01-29 16:52:42.781835852 +0000 UTC m=+1085.182420496" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.841103 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" podStartSLOduration=3.152430292 podStartE2EDuration="15.841087791s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.315332573 +0000 UTC m=+1071.715917217" lastFinishedPulling="2026-01-29 16:52:42.003990072 +0000 UTC m=+1084.404574716" observedRunningTime="2026-01-29 16:52:42.813424654 +0000 UTC m=+1085.214009298" watchObservedRunningTime="2026-01-29 16:52:42.841087791 +0000 UTC m=+1085.241672435" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.841316 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" podStartSLOduration=2.66748733 podStartE2EDuration="14.841311747s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.85965003 +0000 UTC m=+1072.260234674" lastFinishedPulling="2026-01-29 16:52:42.033474447 +0000 UTC m=+1084.434059091" observedRunningTime="2026-01-29 16:52:42.837234088 +0000 UTC m=+1085.237818732" watchObservedRunningTime="2026-01-29 16:52:42.841311747 +0000 UTC m=+1085.241896391" Jan 29 
16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.866487 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" podStartSLOduration=2.402359984 podStartE2EDuration="14.866468337s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.54293636 +0000 UTC m=+1071.943521004" lastFinishedPulling="2026-01-29 16:52:42.007044713 +0000 UTC m=+1084.407629357" observedRunningTime="2026-01-29 16:52:42.860012255 +0000 UTC m=+1085.260596899" watchObservedRunningTime="2026-01-29 16:52:42.866468337 +0000 UTC m=+1085.267052981" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.885867 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" podStartSLOduration=3.426426275 podStartE2EDuration="15.885851934s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.544585823 +0000 UTC m=+1071.945170477" lastFinishedPulling="2026-01-29 16:52:42.004011492 +0000 UTC m=+1084.404596136" observedRunningTime="2026-01-29 16:52:42.884339403 +0000 UTC m=+1085.284924037" watchObservedRunningTime="2026-01-29 16:52:42.885851934 +0000 UTC m=+1085.286436578" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.899666 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" podStartSLOduration=3.6165319719999998 podStartE2EDuration="15.899651782s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.645897173 +0000 UTC m=+1072.046481817" lastFinishedPulling="2026-01-29 16:52:41.929016973 +0000 UTC m=+1084.329601627" observedRunningTime="2026-01-29 16:52:42.89696117 +0000 UTC m=+1085.297545814" watchObservedRunningTime="2026-01-29 16:52:42.899651782 +0000 UTC m=+1085.300236426" Jan 29 16:52:42 crc kubenswrapper[4746]: I0129 16:52:42.998800 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" podStartSLOduration=3.641781164 podStartE2EDuration="15.998766053s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.647955438 +0000 UTC m=+1072.048540082" lastFinishedPulling="2026-01-29 16:52:42.004940327 +0000 UTC m=+1084.405524971" observedRunningTime="2026-01-29 16:52:42.949702235 +0000 UTC m=+1085.350286869" watchObservedRunningTime="2026-01-29 16:52:42.998766053 +0000 UTC m=+1085.399350697" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.041640 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" podStartSLOduration=3.362343107 podStartE2EDuration="16.041625875s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:28.785435021 +0000 UTC m=+1071.186019665" lastFinishedPulling="2026-01-29 16:52:41.464717789 +0000 UTC m=+1083.865302433" observedRunningTime="2026-01-29 16:52:43.040462184 +0000 UTC m=+1085.441046828" watchObservedRunningTime="2026-01-29 16:52:43.041625875 +0000 UTC m=+1085.442210509" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.065341 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" podStartSLOduration=2.7226791500000003 
podStartE2EDuration="15.065318567s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.661461788 +0000 UTC m=+1072.062046432" lastFinishedPulling="2026-01-29 16:52:42.004101205 +0000 UTC m=+1084.404685849" observedRunningTime="2026-01-29 16:52:42.999753859 +0000 UTC m=+1085.400338503" watchObservedRunningTime="2026-01-29 16:52:43.065318567 +0000 UTC m=+1085.465903211" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.094076 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p" podStartSLOduration=3.451472472 podStartE2EDuration="16.094057962s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:28.829703341 +0000 UTC m=+1071.230287985" lastFinishedPulling="2026-01-29 16:52:41.472288831 +0000 UTC m=+1083.872873475" observedRunningTime="2026-01-29 16:52:43.092953433 +0000 UTC m=+1085.493538077" watchObservedRunningTime="2026-01-29 16:52:43.094057962 +0000 UTC m=+1085.494642606" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.123120 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" podStartSLOduration=3.220417724 podStartE2EDuration="16.123101776s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.104352481 +0000 UTC m=+1071.504937125" lastFinishedPulling="2026-01-29 16:52:42.007036533 +0000 UTC m=+1084.407621177" observedRunningTime="2026-01-29 16:52:43.121893024 +0000 UTC m=+1085.522477658" watchObservedRunningTime="2026-01-29 16:52:43.123101776 +0000 UTC m=+1085.523686420" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.186660 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" podStartSLOduration=3.840060858 podStartE2EDuration="16.18664342s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.544321476 +0000 UTC m=+1071.944906120" lastFinishedPulling="2026-01-29 16:52:41.890904038 +0000 UTC m=+1084.291488682" observedRunningTime="2026-01-29 16:52:43.17161664 +0000 UTC m=+1085.572201284" watchObservedRunningTime="2026-01-29 16:52:43.18664342 +0000 UTC m=+1085.587228064" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.221374 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8" podStartSLOduration=3.339998991 podStartE2EDuration="16.221353074s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.04802139 +0000 UTC m=+1071.448606034" lastFinishedPulling="2026-01-29 16:52:41.929375473 +0000 UTC m=+1084.329960117" observedRunningTime="2026-01-29 16:52:43.215118549 +0000 UTC m=+1085.615703193" watchObservedRunningTime="2026-01-29 16:52:43.221353074 +0000 UTC m=+1085.621937718" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.266947 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" podStartSLOduration=3.5195445579999998 podStartE2EDuration="16.26693198s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.257838542 +0000 UTC m=+1071.658423186" lastFinishedPulling="2026-01-29 16:52:42.005225954 +0000 UTC 
m=+1084.405810608" observedRunningTime="2026-01-29 16:52:43.261333041 +0000 UTC m=+1085.661917685" watchObservedRunningTime="2026-01-29 16:52:43.26693198 +0000 UTC m=+1085.667516624" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.285902 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" podStartSLOduration=3.351137909 podStartE2EDuration="16.285871324s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.072286087 +0000 UTC m=+1071.472870721" lastFinishedPulling="2026-01-29 16:52:42.007019492 +0000 UTC m=+1084.407604136" observedRunningTime="2026-01-29 16:52:43.284834297 +0000 UTC m=+1085.685418941" watchObservedRunningTime="2026-01-29 16:52:43.285871324 +0000 UTC m=+1085.686455968" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.816529 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:43 crc kubenswrapper[4746]: I0129 16:52:43.825358 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26375595-f5e1-4568-ac8b-8db08398d97a-cert\") pod \"infra-operator-controller-manager-79955696d6-q94c9\" (UID: \"26375595-f5e1-4568-ac8b-8db08398d97a\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:44 crc kubenswrapper[4746]: I0129 16:52:44.103131 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vxc6l" Jan 29 16:52:44 crc kubenswrapper[4746]: I0129 16:52:44.111661 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:44 crc kubenswrapper[4746]: I0129 16:52:44.119916 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:52:44 crc kubenswrapper[4746]: E0129 16:52:44.120091 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:44 crc kubenswrapper[4746]: E0129 16:52:44.120181 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert podName:38062c77-be0b-4138-b77e-330e2dd20cc0 nodeName:}" failed. No retries permitted until 2026-01-29 16:53:00.120166078 +0000 UTC m=+1102.520750722 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" (UID: "38062c77-be0b-4138-b77e-330e2dd20cc0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:52:44 crc kubenswrapper[4746]: I0129 16:52:44.423581 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:44 crc kubenswrapper[4746]: I0129 16:52:44.423904 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:52:44 crc kubenswrapper[4746]: E0129 16:52:44.423741 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:52:44 crc kubenswrapper[4746]: E0129 16:52:44.424072 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:52:44 crc kubenswrapper[4746]: E0129 16:52:44.424133 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:53:00.423999216 +0000 UTC m=+1102.824583910 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "metrics-server-cert" not found Jan 29 16:52:44 crc kubenswrapper[4746]: E0129 16:52:44.424222 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs podName:4157a634-ad19-42ac-9ef4-7249fe50798f nodeName:}" failed. No retries permitted until 2026-01-29 16:53:00.424206122 +0000 UTC m=+1102.824790766 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-fffl2" (UID: "4157a634-ad19-42ac-9ef4-7249fe50798f") : secret "webhook-server-cert" not found Jan 29 16:52:44 crc kubenswrapper[4746]: I0129 16:52:44.461330 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-q94c9"] Jan 29 16:52:44 crc kubenswrapper[4746]: W0129 16:52:44.495950 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26375595_f5e1_4568_ac8b_8db08398d97a.slice/crio-5931a484e050fe0bc8dddbd49b11f52fcef01feaa0cb883757b1775afb01345c WatchSource:0}: Error finding container 5931a484e050fe0bc8dddbd49b11f52fcef01feaa0cb883757b1775afb01345c: Status 404 returned error can't find the container with id 5931a484e050fe0bc8dddbd49b11f52fcef01feaa0cb883757b1775afb01345c Jan 29 16:52:44 crc kubenswrapper[4746]: I0129 16:52:44.793700 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" event={"ID":"26375595-f5e1-4568-ac8b-8db08398d97a","Type":"ContainerStarted","Data":"5931a484e050fe0bc8dddbd49b11f52fcef01feaa0cb883757b1775afb01345c"} Jan 29 16:52:45 crc kubenswrapper[4746]: I0129 16:52:45.800826 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" event={"ID":"ada28b6b-5615-4bc1-ba2e-f1ab3408b64c","Type":"ContainerStarted","Data":"8aba073fcf3892800be614dc74472d95322b552216386f5d450c8c3510da4721"} Jan 29 16:52:45 crc kubenswrapper[4746]: I0129 16:52:45.801378 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" Jan 29 16:52:45 crc kubenswrapper[4746]: I0129 16:52:45.829515 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" podStartSLOduration=3.213421077 podStartE2EDuration="18.829488882s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.678480342 +0000 UTC m=+1072.079064986" lastFinishedPulling="2026-01-29 16:52:45.294548147 +0000 UTC m=+1087.695132791" observedRunningTime="2026-01-29 16:52:45.817038471 +0000 UTC m=+1088.217623115" watchObservedRunningTime="2026-01-29 16:52:45.829488882 +0000 UTC m=+1088.230073526" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.114800 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kgx8p" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.138693 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-95h56" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.213486 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-9r8qw" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.300592 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-xqxxr" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.310222 4746 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l886h" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.321051 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-ggshz" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.346834 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-74gtz" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.388313 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-52kgv" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.388434 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-2bww8" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.480842 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lzcpg" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.514253 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2t2tl" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.529478 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h6pgd" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.551532 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-rdtrn" Jan 29 16:52:48 crc kubenswrapper[4746]: I0129 16:52:48.571411 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kzmwp" Jan 29 16:52:49 crc kubenswrapper[4746]: I0129 16:52:49.026664 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-rmnff" Jan 29 16:52:49 crc kubenswrapper[4746]: I0129 16:52:49.064753 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:52:49 crc kubenswrapper[4746]: I0129 16:52:49.064801 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.839737 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" event={"ID":"30bcd237-b2b9-4c61-974b-bca62a288e84","Type":"ContainerStarted","Data":"64c1088abb4f60f193ac7b4876f618fe8619b35537b9a8d4723cc8b906ca3971"} Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.840362 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.850140 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" event={"ID":"26375595-f5e1-4568-ac8b-8db08398d97a","Type":"ContainerStarted","Data":"3866ffff8a509340c97041853750bd51e3975127e5a01715ecb603dc44bc3435"} Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.850298 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.852209 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" event={"ID":"257b1191-ee6f-42f3-9894-78f1a43cfd3d","Type":"ContainerStarted","Data":"211e063a1e5f4eaf280854de16c6c58f9286788117f200fbb7d25b0b4669082b"} Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.852376 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.853283 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" event={"ID":"d3b1117b-c064-4425-94ff-7d9b2ff94b8d","Type":"ContainerStarted","Data":"afe487bf303c4bc246f4732d5d45bd84c0cbef6afa51470f9a7eeb697b85b41d"} Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.853503 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.868002 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" podStartSLOduration=2.695731092 podStartE2EDuration="22.867984611s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.679149389 +0000 UTC m=+1072.079734033" lastFinishedPulling="2026-01-29 16:52:49.851402908 +0000 UTC m=+1092.251987552" observedRunningTime="2026-01-29 16:52:50.86008549 +0000 UTC m=+1093.260670134" watchObservedRunningTime="2026-01-29 16:52:50.867984611 +0000 UTC m=+1093.268569255" Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.876176 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" podStartSLOduration=2.631743606 podStartE2EDuration="22.876157168s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.779072693 +0000 UTC m=+1072.179657337" lastFinishedPulling="2026-01-29 16:52:50.023486255 +0000 UTC m=+1092.424070899" observedRunningTime="2026-01-29 16:52:50.874395221 +0000 UTC m=+1093.274979865" watchObservedRunningTime="2026-01-29 16:52:50.876157168 +0000 UTC m=+1093.276741812" Jan 29 16:52:50 crc kubenswrapper[4746]: I0129 16:52:50.890938 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" podStartSLOduration=2.815737931 podStartE2EDuration="22.890919082s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.776219457 +0000 UTC m=+1072.176804101" lastFinishedPulling="2026-01-29 16:52:49.851400608 +0000 UTC 
m=+1092.251985252" observedRunningTime="2026-01-29 16:52:50.889469243 +0000 UTC m=+1093.290053887" watchObservedRunningTime="2026-01-29 16:52:50.890919082 +0000 UTC m=+1093.291503736" Jan 29 16:52:52 crc kubenswrapper[4746]: I0129 16:52:52.868301 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" event={"ID":"8f22c3e7-ca4b-471c-8c28-6d817f25582d","Type":"ContainerStarted","Data":"ad8e769455ecf03fe14c3f76d7a8832de558b1a336033dc629726e3d58df60da"} Jan 29 16:52:52 crc kubenswrapper[4746]: I0129 16:52:52.885855 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" podStartSLOduration=20.506600409 podStartE2EDuration="25.885835857s" podCreationTimestamp="2026-01-29 16:52:27 +0000 UTC" firstStartedPulling="2026-01-29 16:52:44.502860118 +0000 UTC m=+1086.903444762" lastFinishedPulling="2026-01-29 16:52:49.882095566 +0000 UTC m=+1092.282680210" observedRunningTime="2026-01-29 16:52:50.905411918 +0000 UTC m=+1093.305996562" watchObservedRunningTime="2026-01-29 16:52:52.885835857 +0000 UTC m=+1095.286420501" Jan 29 16:52:52 crc kubenswrapper[4746]: I0129 16:52:52.885995 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lw9ng" podStartSLOduration=2.508785739 podStartE2EDuration="24.885990251s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:52:29.782293558 +0000 UTC m=+1072.182878192" lastFinishedPulling="2026-01-29 16:52:52.15949804 +0000 UTC m=+1094.560082704" observedRunningTime="2026-01-29 16:52:52.882381525 +0000 UTC m=+1095.282966159" watchObservedRunningTime="2026-01-29 16:52:52.885990251 +0000 UTC m=+1095.286574895" Jan 29 16:52:58 crc kubenswrapper[4746]: I0129 16:52:58.594918 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-t88fd" Jan 29 16:52:58 crc kubenswrapper[4746]: I0129 16:52:58.676027 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-x4k9t" Jan 29 16:52:58 crc kubenswrapper[4746]: I0129 16:52:58.691716 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ls2m7" Jan 29 16:52:58 crc kubenswrapper[4746]: I0129 16:52:58.762928 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-cdm9l" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.137368 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.148552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38062c77-be0b-4138-b77e-330e2dd20cc0-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5\" (UID: \"38062c77-be0b-4138-b77e-330e2dd20cc0\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.372560 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-c8dlx" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.381654 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.442074 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.443061 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.450591 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.465586 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4157a634-ad19-42ac-9ef4-7249fe50798f-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-fffl2\" (UID: \"4157a634-ad19-42ac-9ef4-7249fe50798f\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.619511 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4d284" Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.627483 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:53:00 crc kubenswrapper[4746]: W0129 16:53:00.897633 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38062c77_be0b_4138_b77e_330e2dd20cc0.slice/crio-5e1295e32ae9fb968bc77125e79d98f7cc3fc3f53d4a9d3c04baad9e0944eb49 WatchSource:0}: Error finding container 5e1295e32ae9fb968bc77125e79d98f7cc3fc3f53d4a9d3c04baad9e0944eb49: Status 404 returned error can't find the container with id 5e1295e32ae9fb968bc77125e79d98f7cc3fc3f53d4a9d3c04baad9e0944eb49 Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.898652 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2"] Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.905980 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5"] Jan 29 16:53:00 crc kubenswrapper[4746]: I0129 16:53:00.986968 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:53:01 crc kubenswrapper[4746]: I0129 16:53:01.406956 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" event={"ID":"38062c77-be0b-4138-b77e-330e2dd20cc0","Type":"ContainerStarted","Data":"5e1295e32ae9fb968bc77125e79d98f7cc3fc3f53d4a9d3c04baad9e0944eb49"} Jan 29 16:53:01 crc kubenswrapper[4746]: I0129 16:53:01.408681 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" event={"ID":"4157a634-ad19-42ac-9ef4-7249fe50798f","Type":"ContainerStarted","Data":"2041aa7fdf0638338961a829f52fadf9a2dc3366b1cd5c2202cd1988db51bf71"} Jan 29 16:53:02 crc kubenswrapper[4746]: I0129 16:53:02.420766 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" event={"ID":"4157a634-ad19-42ac-9ef4-7249fe50798f","Type":"ContainerStarted","Data":"850cff313ba8863e4f98d8d5954c28cc402d4cafe395264060c9fceec29835df"} Jan 29 16:53:04 crc kubenswrapper[4746]: I0129 16:53:04.118712 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-q94c9" Jan 29 16:53:08 crc kubenswrapper[4746]: I0129 16:53:08.458679 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:53:08 crc kubenswrapper[4746]: I0129 16:53:08.465590 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" Jan 29 16:53:08 crc kubenswrapper[4746]: I0129 16:53:08.529043 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-fffl2" podStartSLOduration=40.529022623 podStartE2EDuration="40.529022623s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:08.519163501 +0000 UTC m=+1110.919748155" watchObservedRunningTime="2026-01-29 16:53:08.529022623 +0000 UTC m=+1110.929607267" Jan 29 16:53:12 crc 
kubenswrapper[4746]: E0129 16:53:12.862089 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3707779137/1\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:89f6fd332fabefd2fff5619432986b37c1c6d197dd1c510f21dfe4609939b8a6" Jan 29 16:53:12 crc kubenswrapper[4746]: E0129 16:53:12.862984 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:89f6fd332fabefd2fff5619432986b37c1c6d197dd1c510f21dfe4609939b8a6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:035a6bd701c0b425f3e026163f3f05d62e59654048715c9e3a7bf36c80f1042c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:9d2f107ddcf79172d7d1c8409c51134b63baf7f3a4c98a5d48cd8a3ef4007d02,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:36946a77001110f391fb254ec77129803a6b7c34dacfa1a4c8c51aa8d23d57c5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:dd58b29b5d88662a621c685c2b76fe8a71cc9e82aa85dff22a66182a6ceef3ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:fc47ed1c6249c9f6ef13ef1eac82d5a34819a715dea5117d33df0d0dc69ace8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:e21d35c272d016f4dbd323dc827ee83538c96674adfb188e362aa652ce167b61,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:c2ace235f775334be02d78928802b76309543e869cc6b4b55843ee546691e6c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:be77cc58b87f299b42bb2cbe74f3f8d028b8c887851a53209441b60e1363aeb5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:41dc9cf27a902d9c7b392d730bd761cf3c391a548a84
1e9e4d38e1571f3c53bf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:174f8f712eb5fdda5061a1a68624befb27bbe766842653788583ec74c5ae506a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:828e2158704d4954145386c2ef8d02a98d34f9e4170fdec3cb0e6de4c955ca92,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b8d76f96b6f17a3318d089c0b5c0e6c292d969ab392cdcc708ec0f0188c953ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:43c55407c7c9b4141482533546e6570535373f7e36df374dfbbe388293c19dbf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:097816f289af117f14cd8ee1678a9635e8da6de4a1bde834d02199c4ef65c5c0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:99e01b5a4f51eedf0ee75c6547df083306a134a002d86525c78f20884a196edd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:90842ef1767955e1d17ae60e97fb0673620d54da6581261c6b6dad3a1e6ec684,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:281668af8ed34c2464f3593d350cf7b695b41b81f40cc539ad74b7b65822afb9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:84319e5dd6569ea531e64b688557c2a2e20deb5225f3d349e402e34858f00fe7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:acb53e0e210562091843c212bc0cf5541daacd6f2bd18923430bae8c36578731,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:be6f4002842ebadf30d035721567a7e669f12a6eef8c00dc89030b3b08f3dd2c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:988635be61f6ed8c0d707622193b7efe8e9b1dc7effbf9b09d2db5ec593b59e7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:63e08752678a68571e1c54ceea42c113af493a04cdc22198a3713df7b53f87e5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-ante
lope-centos9/openstack-designate-worker@sha256:6741d06b0f1bbeb2968807dc5be45853cdd3dfb9cc7ea6ef23e909ae24f3cbf4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:1803a36d1a397a5595dddb4a2f791ab9443d3af97391a53928fa495ca7032d93,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:d163fcf801d67d9c67b2ae4368675b75714db7c531de842aad43979a888c5d57,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:15bf81d933a44128cb6f3264632a9563337eb3bfe82c4a33c746595467d3b0c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:3a08e21338f651a90ee83ae46242b8c80c64488144f27a77848517049c3a8f5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:ebeb4443ab9f9360925f7abd9c24b7a453390d678f79ed247d2042dcc6f9c3fc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:04bb4cd601b08034c6cba18e701fcd36026ec4340402ed710a0bbd09d8e4884d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:27b80783b7d4658d89dda9a09924e9ee472908a8fa1c86bcf3f773d17a4196e0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:8cb133c5a5551e1aa11ef3326149db1babbf00924d0ff493ebe3346b69fd4b5b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:13c3567176bb2d033f6c6b30e20404bd67a217e2537210bf222f3afe0c8619b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:60ac3446d57f1a97a6ca2d8e6584b00aa18704bc2707a7ac1a6a28c6d685d215,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-cent
os9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:7e7788d1aae251e60f4012870140c65bce9760cd27feaeec5f65c42fe4ffce77,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:6a401117007514660c694248adce8136d83559caf1b38e475935335e09ac954a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:364d50f873551805782c23264570eff40e3807f35d9bccdd456515b4e31da488,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2d72dd490576e0cb670d21a08420888f3758d64ed0cbd2ef8b9aa8488ad2ce40,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:96fdf7cddf31509ee63950a9d61320d0b01beb1212e28f37a6e872d6589ded22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:8b7534a2999075f919fc162d21f76026e8bf781913cc3d2ac07e484e9b2fc596,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:d65eaaea2ab02d63af9d8a106619908fa01a2e56bd6753edc5590e66e46270db,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:d042d7f91bafb002affff8cf750d694a0da129377255c502028528fe2280e790,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:a8faef9ea5e8ef8327b7fbb9b9cafc74c38c09c7e3b2365a7cad5eb49766f71d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:88aa46ea03a5584560806aa4b093584fda6b2f54c562005b72be2e3615688090,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:c08ecdfb7638c1897004347d835bdbabacff40a345f64c2b3111c377096bfa56,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:8b4025a4f30e83acc0b51ac063eea701006a302a1acbdec53f54b540270887f7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:4992f5
ddbd20cca07e750846b2dbe7c51c5766c3002c388f8d8a158e347ec63d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:20b3ad38accb9eb8849599280a263d3436a5af03d89645e5ec4508586297ffde,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:378ed518b68ea809cffa2ff7a93d51e52cfc53af14eedc978924fdabccef0325,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:3f746f7c6a8c48c0f4a800dcb4bc49bfbc4de4a9ca6a55d8f22bc515a92ea1d9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:e1f7bf105190c3cbbfcf0aeeb77a92d1466100ba8377221ed5eee228949e05bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:954b4c60705b229a968aba3b5b35ab02759378706103ed1189fae3e3316fac35,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:f2e0025727efb95efa65e6af6338ae3fc79bf61095d6d54931a0be8d7fe9acac,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:14bea235240279c25119a3d10eb3067c5e290255d1eecdd367c212f5d5b7b3c7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:95e30a9e9ee5f2452d2dfa3855f9e8fdaf49020bb364c6875c2cdae8579f6e5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:194121c2d79401bd41f75428a437fe32a5806a6a160f7d80798ff66baed9afa5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:947c1bb9373b7d3f2acea104a5666e394c830111bf80d133f1fe7238e4d06f28,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-a
ntelope-centos9/openstack-ovn-northd@sha256:425ebddc9d6851ee9c730e67eaf43039943dc7937fb11332a41335a9114b2d44,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bea03c7c34dc6ef8bc163e12a8940011b8feebc44a2efaaba2d3c4c6c515d6c8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:a2280bc80b454dc9e5c95daf74b8a53d6f9e42fc16d45287e089fc41014fe1da,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:88d687a7bb593b2e61598b422baba84d67c114419590a6d83d15327d119ce208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2635e02b99d380b2e547013c09c6c8da01bc89b3d3ce570e4d8f8656c7635b0e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a357cf166caaeea230f8a912aceb042e3170c5d680844e8f97b936baa10834ed,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:bbbcfd8e609e47933eb08c75155aa14d52dd4b88394116079ed4d208191909f9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:cbcf0135796a182e108841d691d17310dc2ec027b1246efe9688ce20d44cbf00,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:1c6cd5a7343867a8f7115ef5044bfa715eaf90cc4d5801928a9fe0e7f3640487,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p82ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5_openstack-operators(38062c77-be0b-4138-b77e-330e2dd20cc0): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3707779137/1\": happened during read: context canceled" logger="UnhandledError"
Jan 29 16:53:12 crc kubenswrapper[4746]: E0129 16:53:12.864330 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3707779137/1\\\": happened during read: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" podUID="38062c77-be0b-4138-b77e-330e2dd20cc0"
Jan 29 16:53:13 crc kubenswrapper[4746]: E0129 16:53:13.492711 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:89f6fd332fabefd2fff5619432986b37c1c6d197dd1c510f21dfe4609939b8a6\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" podUID="38062c77-be0b-4138-b77e-330e2dd20cc0"
Jan 29 16:53:19 crc kubenswrapper[4746]: I0129 16:53:19.064993 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:53:19 crc kubenswrapper[4746]: I0129 16:53:19.065993 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:53:29 crc kubenswrapper[4746]: I0129 16:53:29.621473 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" event={"ID":"38062c77-be0b-4138-b77e-330e2dd20cc0","Type":"ContainerStarted","Data":"dc9b77e229d222773bdf6ceec789cf7639ed15959c2b8287aa210a29af5d35fb"}
Jan 29 16:53:29 crc kubenswrapper[4746]: I0129 16:53:29.622199 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5"
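The probe records above show kubelet's HTTP prober at work: the machine-config-daemon liveness probe is an HTTP GET against http://127.0.0.1:8798/health, and the container spec earlier in this entry defines HTTPGet probes on /healthz and /readyz (port 8081). Kubelet counts an HTTP probe as successful when the GET completes with a status code of at least 200 and below 400; a transport error such as the "connect: connection refused" seen here is a failure, and only FailureThreshold consecutive failures (3 in the spec above) trigger a restart. A minimal Go sketch of that check, illustrative only and not kubelet's actual prober code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP mimics the check behind the "Probe failed" records above:
// an HTTP GET against the probe endpoint. Any transport error (e.g.
// "connect: connection refused") or a status outside 200-399 counts
// as a failure. TimeoutSeconds:1 in the spec maps to the client
// timeout here.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // dial errors surface verbatim in the output= field
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// The liveness endpoint that fails at 16:53:19 above.
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("probe failed:", err)
	}
}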
Jan 29 16:53:29 crc kubenswrapper[4746]: I0129 16:53:29.647863 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5" podStartSLOduration=33.522433205 podStartE2EDuration="1m1.647838891s" podCreationTimestamp="2026-01-29 16:52:28 +0000 UTC" firstStartedPulling="2026-01-29 16:53:00.986682427 +0000 UTC m=+1103.387267081" lastFinishedPulling="2026-01-29 16:53:29.112088123 +0000 UTC m=+1131.512672767" observedRunningTime="2026-01-29 16:53:29.643177616 +0000 UTC m=+1132.043762270" watchObservedRunningTime="2026-01-29 16:53:29.647838891 +0000 UTC m=+1132.048423545"
Jan 29 16:53:40 crc kubenswrapper[4746]: I0129 16:53:40.394052 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5"
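The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (16:53:29.647838891 - 16:52:28 = 1m1.647838891s), and podStartSLOduration excludes the image-pull window. Using the monotonic clock offsets (the m=+... values), the pull took 1131.512672767 - 1103.387267081 = 28.125405686s, and 61.647838891 - 28.125405686 = 33.522433205, exactly the logged SLO figure. A quick Go sketch of the arithmetic, with the values copied from the record and subject to float rounding in the last digit:

package main

import "fmt"

// Reproduces podStartSLOduration from the record above:
// SLO duration = E2E duration minus the image-pull window,
// taken from the monotonic clock offsets (the m=+... values).
func main() {
	const (
		firstStartedPulling = 1103.387267081 // m=+ offset, seconds
		lastFinishedPulling = 1131.512672767 // m=+ offset, seconds
		podStartE2E         = 61.647838891   // 1m1.647838891s
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9fs\n", pullWindow)               // ~28.125405686s
	fmt.Printf("SLO duration: %.9fs\n", podStartE2E-pullWindow)   // ~33.522433205s
}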
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.065751 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.066379 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.066426 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw"
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.067114 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f497afed52a8e95c6830b33adef89933088f61ef0f396f26bc62e5bc61330609"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.067179 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://f497afed52a8e95c6830b33adef89933088f61ef0f396f26bc62e5bc61330609" gracePeriod=600
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.758206 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="f497afed52a8e95c6830b33adef89933088f61ef0f396f26bc62e5bc61330609" exitCode=0
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.758227 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"f497afed52a8e95c6830b33adef89933088f61ef0f396f26bc62e5bc61330609"}
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.758555 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"c1bf44a70454193334b73bbbaa8e59d7b095d5f8d7c6a3569af1049d7583b251"}
Jan 29 16:53:49 crc kubenswrapper[4746]: I0129 16:53:49.758577 4746 scope.go:117] "RemoveContainer" containerID="3638d7699d354888da89723ea0a7801e67c37af27cf4d7fc2d221d9637b01dae"
Jan 29 16:53:54 crc kubenswrapper[4746]: I0129 16:53:54.938244 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"]
Jan 29 16:53:54 crc kubenswrapper[4746]: I0129 16:53:54.940303 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:54 crc kubenswrapper[4746]: I0129 16:53:54.945138 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-f6ssz"
Jan 29 16:53:54 crc kubenswrapper[4746]: I0129 16:53:54.945384 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 29 16:53:54 crc kubenswrapper[4746]: I0129 16:53:54.945410 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 29 16:53:54 crc kubenswrapper[4746]: I0129 16:53:54.945549 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 29 16:53:54 crc kubenswrapper[4746]: I0129 16:53:54.947012 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"]
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.009401 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-r6btd"]
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.011044 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.013984 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.024623 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-r6btd"]
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.024641 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac52906-0c62-48db-8a62-c4080a9bd31a-config\") pod \"dnsmasq-dns-84bb9d8bd9-dmpf5\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.024862 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp9rk\" (UniqueName: \"kubernetes.io/projected/3ac52906-0c62-48db-8a62-c4080a9bd31a-kube-api-access-xp9rk\") pod \"dnsmasq-dns-84bb9d8bd9-dmpf5\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.125949 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-dns-svc\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.126002 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvj8r\" (UniqueName: \"kubernetes.io/projected/ef547608-42e6-43ce-9261-aac355f656c1-kube-api-access-bvj8r\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.126034 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp9rk\" (UniqueName: \"kubernetes.io/projected/3ac52906-0c62-48db-8a62-c4080a9bd31a-kube-api-access-xp9rk\") pod \"dnsmasq-dns-84bb9d8bd9-dmpf5\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.126096 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-config\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.126134 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac52906-0c62-48db-8a62-c4080a9bd31a-config\") pod \"dnsmasq-dns-84bb9d8bd9-dmpf5\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.127048 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac52906-0c62-48db-8a62-c4080a9bd31a-config\") pod \"dnsmasq-dns-84bb9d8bd9-dmpf5\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.145223 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp9rk\" (UniqueName: \"kubernetes.io/projected/3ac52906-0c62-48db-8a62-c4080a9bd31a-kube-api-access-xp9rk\") pod \"dnsmasq-dns-84bb9d8bd9-dmpf5\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.227790 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-dns-svc\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.227845 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvj8r\" (UniqueName: \"kubernetes.io/projected/ef547608-42e6-43ce-9261-aac355f656c1-kube-api-access-bvj8r\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.227897 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-config\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.228910 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-config\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.228935 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-dns-svc\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.244466 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvj8r\" (UniqueName: \"kubernetes.io/projected/ef547608-42e6-43ce-9261-aac355f656c1-kube-api-access-bvj8r\") pod \"dnsmasq-dns-5f854695bc-r6btd\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.260344 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.332073 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-r6btd"
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.527039 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"]
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.810288 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5" event={"ID":"3ac52906-0c62-48db-8a62-c4080a9bd31a","Type":"ContainerStarted","Data":"e35a256dfc01329b14400e0c5daefdedbde0522f6f101630cf9c3ed3752d2671"}
Jan 29 16:53:55 crc kubenswrapper[4746]: I0129 16:53:55.831998 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-r6btd"]
Jan 29 16:53:55 crc kubenswrapper[4746]: W0129 16:53:55.835139 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef547608_42e6_43ce_9261_aac355f656c1.slice/crio-43ec50246304ea23ccf9226da128fd44349d1d59aa97c4b23cef3b98efc263de WatchSource:0}: Error finding container 43ec50246304ea23ccf9226da128fd44349d1d59aa97c4b23cef3b98efc263de: Status 404 returned error can't find the container with id 43ec50246304ea23ccf9226da128fd44349d1d59aa97c4b23cef3b98efc263de
Jan 29 16:53:56 crc kubenswrapper[4746]: I0129 16:53:56.822645 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-r6btd" event={"ID":"ef547608-42e6-43ce-9261-aac355f656c1","Type":"ContainerStarted","Data":"43ec50246304ea23ccf9226da128fd44349d1d59aa97c4b23cef3b98efc263de"}
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.124908 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-r6btd"]
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.163855 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-f8x92"]
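Each volume in the dnsmasq pods above moves through the same three milestones: operationExecutor.VerifyControllerAttachedVolume started, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded; kubelet does not start a pod's containers until every one of its volumes reports SetUp succeeded. A compact Go sketch of that progression follows; the type and constant names are illustrative, not kubelet's own:

package main

import "fmt"

// volumePhase mirrors the three log milestones each volume passes
// through above (e.g. "config", "dns-svc", "kube-api-access-...").
type volumePhase int

const (
	verifyAttached volumePhase = iota // VerifyControllerAttachedVolume started
	mountStarted                      // operationExecutor.MountVolume started
	setUpSucceeded                    // MountVolume.SetUp succeeded
)

func (p volumePhase) String() string {
	return [...]string{
		"VerifyControllerAttachedVolume started",
		"MountVolume started",
		"MountVolume.SetUp succeeded",
	}[p]
}

func main() {
	// The dnsmasq-dns-5f854695bc-r6btd pod above mounts three volumes;
	// each advances through the phases independently, and containers
	// are created only once all of them reach setUpSucceeded.
	for _, vol := range []string{"config", "dns-svc", "kube-api-access-bvj8r"} {
		for p := verifyAttached; p <= setUpSucceeded; p++ {
			fmt.Printf("volume %q: %s\n", vol, p)
		}
	}
}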
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.166323 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.192915 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-f8x92"]
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.268080 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcd47\" (UniqueName: \"kubernetes.io/projected/24cd3241-a45f-4261-b674-eba5b6ff7b41-kube-api-access-tcd47\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.268173 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-config\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.268210 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.370032 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcd47\" (UniqueName: \"kubernetes.io/projected/24cd3241-a45f-4261-b674-eba5b6ff7b41-kube-api-access-tcd47\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.370827 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-config\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.370870 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.371284 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-config\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.377843 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.394692 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcd47\" (UniqueName: \"kubernetes.io/projected/24cd3241-a45f-4261-b674-eba5b6ff7b41-kube-api-access-tcd47\") pod \"dnsmasq-dns-c7cbb8f79-f8x92\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.509237 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.889305 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"]
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.914112 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-5m2gz"]
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.915624 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.920755 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-5m2gz"]
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.987772 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-config\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.987820 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bh72\" (UniqueName: \"kubernetes.io/projected/4eb70032-cdd8-4ab3-b927-e839d4807e7b-kube-api-access-9bh72\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:57 crc kubenswrapper[4746]: I0129 16:53:57.987841 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-dns-svc\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.089399 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-config\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.089462 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bh72\" (UniqueName: \"kubernetes.io/projected/4eb70032-cdd8-4ab3-b927-e839d4807e7b-kube-api-access-9bh72\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.089483 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-dns-svc\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.090446 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-config\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.090549 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-dns-svc\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.117415 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bh72\" (UniqueName: \"kubernetes.io/projected/4eb70032-cdd8-4ab3-b927-e839d4807e7b-kube-api-access-9bh72\") pod \"dnsmasq-dns-95f5f6995-5m2gz\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.230261 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-f8x92"]
Jan 29 16:53:58 crc kubenswrapper[4746]: W0129 16:53:58.265589 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24cd3241_a45f_4261_b674_eba5b6ff7b41.slice/crio-ada9664f31c33431f691be7f0ba3a47e10b45ea5365cbbcab0fd05becf99a51f WatchSource:0}: Error finding container ada9664f31c33431f691be7f0ba3a47e10b45ea5365cbbcab0fd05becf99a51f: Status 404 returned error can't find the container with id ada9664f31c33431f691be7f0ba3a47e10b45ea5365cbbcab0fd05becf99a51f
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.271160 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz"
Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.324346 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.331578 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.331584 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.332056 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.332368 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tfxrx" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.332456 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.333272 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.335276 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.336957 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.406998 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407064 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407113 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407150 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407175 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407228 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-srb69\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-kube-api-access-srb69\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407252 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407310 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407333 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407368 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.407395 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510129 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510175 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510243 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510275 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510308 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510323 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510353 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srb69\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-kube-api-access-srb69\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510370 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510452 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.510477 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.512127 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.514542 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.514548 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.515747 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.516707 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.517638 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.525822 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.525591 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.526380 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.533988 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srb69\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-kube-api-access-srb69\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.535358 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.555433 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.673693 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.860947 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-5m2gz"] Jan 29 16:53:58 crc kubenswrapper[4746]: I0129 16:53:58.863565 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92" event={"ID":"24cd3241-a45f-4261-b674-eba5b6ff7b41","Type":"ContainerStarted","Data":"ada9664f31c33431f691be7f0ba3a47e10b45ea5365cbbcab0fd05becf99a51f"} Jan 29 16:53:58 crc kubenswrapper[4746]: W0129 16:53:58.868345 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eb70032_cdd8_4ab3_b927_e839d4807e7b.slice/crio-bff5790f8ca474a4cc7dbd571da5f0322981c481cd775f292ad05620c15f28ef WatchSource:0}: Error finding container bff5790f8ca474a4cc7dbd571da5f0322981c481cd775f292ad05620c15f28ef: Status 404 returned error can't find the container with id bff5790f8ca474a4cc7dbd571da5f0322981c481cd775f292ad05620c15f28ef Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.045436 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.051098 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.054909 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.055045 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.055127 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.055263 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.055928 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zkgln" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.056058 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.057230 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.083107 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.134957 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz4l2\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-kube-api-access-cz4l2\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc 
kubenswrapper[4746]: I0129 16:53:59.135015 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.135055 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.135093 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71c96526-7c37-42c2-896e-b551dd6ed5b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.135116 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c96526-7c37-42c2-896e-b551dd6ed5b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.135140 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.135463 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.136308 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.136352 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.136439 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.136546 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.241556 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243092 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243206 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz4l2\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-kube-api-access-cz4l2\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243286 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243340 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243458 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71c96526-7c37-42c2-896e-b551dd6ed5b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243527 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c96526-7c37-42c2-896e-b551dd6ed5b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243641 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243736 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " 
pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243803 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.243875 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.244505 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.245400 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.245974 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.249627 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.249681 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.250386 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.256214 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.256755 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/71c96526-7c37-42c2-896e-b551dd6ed5b8-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.257656 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.263289 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c96526-7c37-42c2-896e-b551dd6ed5b8-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.271141 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz4l2\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-kube-api-access-cz4l2\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.292152 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.411128 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.497016 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:53:59 crc kubenswrapper[4746]: W0129 16:53:59.536261 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b6e0a39_5c0e_4632_bc24_dd8c7eb25788.slice/crio-6e3e128dbba555ec4c780af1e913290c42f8c71e02b73ce12f0257e660f557b5 WatchSource:0}: Error finding container 6e3e128dbba555ec4c780af1e913290c42f8c71e02b73ce12f0257e660f557b5: Status 404 returned error can't find the container with id 6e3e128dbba555ec4c780af1e913290c42f8c71e02b73ce12f0257e660f557b5 Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.843996 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:53:59 crc kubenswrapper[4746]: W0129 16:53:59.874716 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c96526_7c37_42c2_896e_b551dd6ed5b8.slice/crio-cf5729bd3a486d44a7f78af891a024c22f7b6654529830bd75f2d4e5b8ae9ac7 WatchSource:0}: Error finding container cf5729bd3a486d44a7f78af891a024c22f7b6654529830bd75f2d4e5b8ae9ac7: Status 404 returned error can't find the container with id cf5729bd3a486d44a7f78af891a024c22f7b6654529830bd75f2d4e5b8ae9ac7 Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.884140 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz" event={"ID":"4eb70032-cdd8-4ab3-b927-e839d4807e7b","Type":"ContainerStarted","Data":"bff5790f8ca474a4cc7dbd571da5f0322981c481cd775f292ad05620c15f28ef"} Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 
Jan 29 16:53:59 crc kubenswrapper[4746]: I0129 16:53:59.889538 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788","Type":"ContainerStarted","Data":"6e3e128dbba555ec4c780af1e913290c42f8c71e02b73ce12f0257e660f557b5"}
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.433368 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.455656 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.460604 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.460900 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.462131 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-29p5x"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.462331 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.468880 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.478211 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.574465 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.574521 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kolla-config\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.574550 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.574849 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.575033 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcw8\" (UniqueName: \"kubernetes.io/projected/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kube-api-access-qpcw8\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.575168 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.575292 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.575377 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-default\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.679846 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kolla-config\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.679898 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.679935 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.679974 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpcw8\" (UniqueName: \"kubernetes.io/projected/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kube-api-access-qpcw8\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.679999 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.680023 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.680049 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-default\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.680110 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.683070 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.684038 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kolla-config\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.684262 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.692048 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-default\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.692125 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.698026 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.720286 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.735783 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpcw8\" (UniqueName: \"kubernetes.io/projected/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kube-api-access-qpcw8\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.741808 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") " pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.815991 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 16:54:00 crc kubenswrapper[4746]: I0129 16:54:00.923001 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71c96526-7c37-42c2-896e-b551dd6ed5b8","Type":"ContainerStarted","Data":"cf5729bd3a486d44a7f78af891a024c22f7b6654529830bd75f2d4e5b8ae9ac7"}
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.647762 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.829728 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.830945 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.831033 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.840513 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nlf95"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.840764 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.840946 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.841175 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.912958 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.913082 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.913146 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.913234 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.913261 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm97q\" (UniqueName: \"kubernetes.io/projected/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kube-api-access-nm97q\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.913520 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.917448 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.917537 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.954093 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.956855 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.962400 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.962455 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-x4df6"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.962798 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.971219 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 29 16:54:01 crc kubenswrapper[4746]: I0129 16:54:01.973359 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"717a3fe2-fd76-47c2-b7f2-859dd5186f9c","Type":"ContainerStarted","Data":"0eec810ea8f3f534e68b7ca792c37994761f6474f4dac857c3015895a744b0ed"}
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.019882 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0"
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.019987 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.020040 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-config-data\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0"
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.020137 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7swmr\" (UniqueName: \"kubernetes.io/projected/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kube-api-access-7swmr\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0"
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.020214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.020274 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0"
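
Note how the reconciler interleaves records for openstack-cell1-galera-0 and memcached-0 within the same few milliseconds; each record carries a trailing pod="..." attribute identifying its owner. A small sketch for pulling one pod's records out of this raw log (the "openstack/memcached-0" filter value is only an example):

    package main

    import (
            "bufio"
            "fmt"
            "os"
            "regexp"
    )

    // podField matches the trailing pod="..." attribute on each record.
    var podField = regexp.MustCompile(`pod="([^"]+)"`)

    func main() {
            sc := bufio.NewScanner(os.Stdin)
            sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some records are long
            for sc.Scan() {
                    line := sc.Text()
                    if m := podField.FindStringSubmatch(line); m != nil && m[1] == "openstack/memcached-0" {
                            fmt.Println(line)
                    }
            }
    }
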
\"kubernetes.io/projected/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kube-api-access-nm97q\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.020382 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.020428 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.021059 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.021944 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.022161 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.022225 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.022301 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.022335 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kolla-config\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.023730 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.027010 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.027114 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.030789 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.034534 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.040303 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm97q\" (UniqueName: \"kubernetes.io/projected/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kube-api-access-nm97q\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.065428 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.130694 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7swmr\" (UniqueName: \"kubernetes.io/projected/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kube-api-access-7swmr\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.131348 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.131378 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kolla-config\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.131428 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.131467 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-config-data\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.136348 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.137353 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-config-data\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.138276 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kolla-config\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.153812 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.155063 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7swmr\" (UniqueName: \"kubernetes.io/projected/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kube-api-access-7swmr\") pod \"memcached-0\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " pod="openstack/memcached-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.166386 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.304547 4746 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.304547 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.856933 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.968438 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 29 16:54:02 crc kubenswrapper[4746]: W0129 16:54:02.978430 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb931fc5d_d5c3_429f_9c40_073a56aed3ba.slice/crio-aeb70121e546a3a95569fc239d4e603ec65c4e29bff7728db4450182ad58056a WatchSource:0}: Error finding container aeb70121e546a3a95569fc239d4e603ec65c4e29bff7728db4450182ad58056a: Status 404 returned error can't find the container with id aeb70121e546a3a95569fc239d4e603ec65c4e29bff7728db4450182ad58056a
Jan 29 16:54:02 crc kubenswrapper[4746]: I0129 16:54:02.991180 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f5617cc-a91a-4eb7-83d9-25f01bcb890c","Type":"ContainerStarted","Data":"024a529f91d69d5cf5a9f12dd61efc57c8420b0fc6d3303d5cdbf2aacc49ebc1"}
Jan 29 16:54:03 crc kubenswrapper[4746]: I0129 16:54:03.740347 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:54:03 crc kubenswrapper[4746]: I0129 16:54:03.741375 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:54:03 crc kubenswrapper[4746]: I0129 16:54:03.749687 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bdlb7"
Jan 29 16:54:03 crc kubenswrapper[4746]: I0129 16:54:03.768539 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:54:03 crc kubenswrapper[4746]: I0129 16:54:03.784252 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smzj\" (UniqueName: \"kubernetes.io/projected/0439cd84-86aa-425b-91d1-5ab5a68e3210-kube-api-access-2smzj\") pod \"kube-state-metrics-0\" (UID: \"0439cd84-86aa-425b-91d1-5ab5a68e3210\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:54:03 crc kubenswrapper[4746]: I0129 16:54:03.896593 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2smzj\" (UniqueName: \"kubernetes.io/projected/0439cd84-86aa-425b-91d1-5ab5a68e3210-kube-api-access-2smzj\") pod \"kube-state-metrics-0\" (UID: \"0439cd84-86aa-425b-91d1-5ab5a68e3210\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:54:03 crc kubenswrapper[4746]: I0129 16:54:03.934403 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2smzj\" (UniqueName: \"kubernetes.io/projected/0439cd84-86aa-425b-91d1-5ab5a68e3210-kube-api-access-2smzj\") pod \"kube-state-metrics-0\" (UID: \"0439cd84-86aa-425b-91d1-5ab5a68e3210\") " pod="openstack/kube-state-metrics-0"
Jan 29 16:54:04 crc kubenswrapper[4746]: I0129 16:54:04.041534 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b931fc5d-d5c3-429f-9c40-073a56aed3ba","Type":"ContainerStarted","Data":"aeb70121e546a3a95569fc239d4e603ec65c4e29bff7728db4450182ad58056a"}
Jan 29 16:54:04 crc kubenswrapper[4746]: I0129 16:54:04.068575 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.222332 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pplw4"]
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.224221 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.226608 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.226788 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nwg9f"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.227123 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.244084 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pplw4"]
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.277207 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-combined-ca-bundle\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.277272 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-log-ovn\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.277300 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb4gq\" (UniqueName: \"kubernetes.io/projected/9d0831ca-9258-426a-b0d5-9ae88e24daa2-kube-api-access-lb4gq\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.277381 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-ovn-controller-tls-certs\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.277414 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run-ovn\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.277468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.277500 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d0831ca-9258-426a-b0d5-9ae88e24daa2-scripts\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.296843 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hlgxj"]
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.305398 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.312900 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hlgxj"]
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.378735 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-etc-ovs\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379129 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-log\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379168 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-ovn-controller-tls-certs\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379216 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-lib\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379244 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run-ovn\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379327 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx8qt\" (UniqueName: \"kubernetes.io/projected/db69fbf3-38bd-403b-b1e6-fbd724d15250-kube-api-access-dx8qt\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379368 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-run\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379396 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d0831ca-9258-426a-b0d5-9ae88e24daa2-scripts\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379492 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-combined-ca-bundle\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379516 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-log-ovn\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379538 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb4gq\" (UniqueName: \"kubernetes.io/projected/9d0831ca-9258-426a-b0d5-9ae88e24daa2-kube-api-access-lb4gq\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379566 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db69fbf3-38bd-403b-b1e6-fbd724d15250-scripts\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379718 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run-ovn\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.379924 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-log-ovn\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.380375 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.381728 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d0831ca-9258-426a-b0d5-9ae88e24daa2-scripts\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.385489 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-ovn-controller-tls-certs\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.385814 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-combined-ca-bundle\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.396358 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb4gq\" (UniqueName: \"kubernetes.io/projected/9d0831ca-9258-426a-b0d5-9ae88e24daa2-kube-api-access-lb4gq\") pod \"ovn-controller-pplw4\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") " pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.481830 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx8qt\" (UniqueName: \"kubernetes.io/projected/db69fbf3-38bd-403b-b1e6-fbd724d15250-kube-api-access-dx8qt\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483046 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-run\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483133 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db69fbf3-38bd-403b-b1e6-fbd724d15250-scripts\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483268 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-run\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483180 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-etc-ovs\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483596 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-log\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483634 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-lib\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483645 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-etc-ovs\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483870 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-log\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.483896 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-lib\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.486325 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db69fbf3-38bd-403b-b1e6-fbd724d15250-scripts\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.500220 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx8qt\" (UniqueName: \"kubernetes.io/projected/db69fbf3-38bd-403b-b1e6-fbd724d15250-kube-api-access-dx8qt\") pod \"ovn-controller-ovs-hlgxj\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") " pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.546964 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pplw4"
Jan 29 16:54:07 crc kubenswrapper[4746]: I0129 16:54:07.636556 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.803047 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.807381 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-w787v" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.808930 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.813751 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.813951 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.815652 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.825278 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847643 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847698 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847754 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshwf\" (UniqueName: \"kubernetes.io/projected/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-kube-api-access-tshwf\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847800 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847835 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847855 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.847883 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950287 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950319 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950348 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950372 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tshwf\" (UniqueName: \"kubernetes.io/projected/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-kube-api-access-tshwf\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950424 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950458 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.950479 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc 
kubenswrapper[4746]: I0129 16:54:10.951609 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.952082 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-config\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.952817 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.954748 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.957370 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.958004 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.965928 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:10 crc kubenswrapper[4746]: I0129 16:54:10.980215 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tshwf\" (UniqueName: \"kubernetes.io/projected/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-kube-api-access-tshwf\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.009500 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.077478 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.079175 4746 util.go:30] "No sandbox for pod can be found. 
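
For the local-volume PV above (local-storage02-crc), the kubelet logs an extra phase that the configmap/secret/projected volumes skip: operation_generator.go:580 records "MountVolume.MountDevice succeeded ... device mount path \"/mnt/openstack/pv02\"" once for the device, and only afterwards does the per-pod "MountVolume.SetUp succeeded" appear. A sketch of that two-phase shape using our own hypothetical types, not kubelet's actual interfaces:

package main

import "fmt"

// Illustrative model of the two-phase mount above. Device-scoped plugins
// (e.g. local-volume) mount the backing device at one node-global path,
// then expose it to each pod; purely per-pod plugins only implement SetUp.
type twoPhaseVolume struct{ globalPath string }

// MountDevice corresponds to "MountVolume.MountDevice succeeded ...
// device mount path": one global mount per device.
func (v *twoPhaseVolume) MountDevice(devicePath string) {
	v.globalPath = devicePath
	fmt.Printf("MountDevice: device mount path %q\n", devicePath)
}

// SetUp corresponds to "MountVolume.SetUp succeeded": per-pod exposure
// (for a local PV, typically a bind mount of the global path).
func (v *twoPhaseVolume) SetUp(podUID string) {
	fmt.Printf("SetUp: expose %s to pod %s\n", v.globalPath, podUID)
}

func main() {
	v := &twoPhaseVolume{}
	v.MountDevice("/mnt/openstack/pv02")            // once per device
	v.SetUp("b4555f5c-9440-4402-96f9-e2bf40c5cfb1") // once per pod
}
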
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.083016 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wr9xn" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.083107 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.083981 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.084045 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.084385 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.135604 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154320 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154402 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154443 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154505 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154559 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99th\" (UniqueName: \"kubernetes.io/projected/56be91c6-da82-45b5-9b98-d5b6f05f244e-kube-api-access-j99th\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154590 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-config\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.154623 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261468 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261584 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261652 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99th\" (UniqueName: \"kubernetes.io/projected/56be91c6-da82-45b5-9b98-d5b6f05f244e-kube-api-access-j99th\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261683 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-config\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261719 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261766 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.261794 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 
16:54:11.261845 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.262954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.262953 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-config\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.267512 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.268937 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.270537 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.286060 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.293105 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99th\" (UniqueName: \"kubernetes.io/projected/56be91c6-da82-45b5-9b98-d5b6f05f244e-kube-api-access-j99th\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.295839 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:11 crc kubenswrapper[4746]: I0129 16:54:11.403014 4746 util.go:30] "No sandbox for pod can be found. 
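
From 16:54:26 the log shifts to image-pull failures: each "PullImage from image service failed ... context canceled" is reported as ErrImagePull, and the immediately following sync attempts are rejected with ImagePullBackOff ("Back-off pulling image ...") until the retry delay elapses. A sketch of a doubling back-off in that style; the 10s initial delay and 300s cap are assumed defaults for illustration, not values read from this node's configuration.

package main

import (
	"fmt"
	"time"
)

// backoff returns the wait after the nth consecutive pull failure,
// doubling from an initial delay up to a limit. Kubelet applies a
// back-off of this general shape between image pulls; the exact
// parameters here are assumptions for the sketch.
func backoff(initial, limit time.Duration, failures int) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, backoff(10*time.Second, 300*time.Second, n))
	}
	// failure 1 -> wait 10s, failure 2 -> wait 20s, ..., capped at 5m0s
}
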
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:26 crc kubenswrapper[4746]: E0129 16:54:26.267941 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 29 16:54:26 crc kubenswrapper[4746]: E0129 16:54:26.268678 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpcw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(717a3fe2-fd76-47c2-b7f2-859dd5186f9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:26 crc kubenswrapper[4746]: E0129 16:54:26.269950 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" Jan 29 16:54:27 crc kubenswrapper[4746]: E0129 16:54:27.245453 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" 
podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" Jan 29 16:54:27 crc kubenswrapper[4746]: E0129 16:54:27.246084 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 16:54:27 crc kubenswrapper[4746]: E0129 16:54:27.246324 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srb69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(6b6e0a39-5c0e-4632-bc24-dd8c7eb25788): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:27 crc kubenswrapper[4746]: E0129 16:54:27.247946 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" Jan 29 16:54:27 crc kubenswrapper[4746]: E0129 16:54:27.248520 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 29 16:54:27 crc kubenswrapper[4746]: E0129 16:54:27.248639 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm97q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(1f5617cc-a91a-4eb7-83d9-25f01bcb890c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:27 crc kubenswrapper[4746]: E0129 16:54:27.250608 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" Jan 29 16:54:28 crc kubenswrapper[4746]: E0129 16:54:28.251242 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" Jan 29 16:54:28 crc kubenswrapper[4746]: E0129 16:54:28.251391 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" Jan 29 16:54:31 crc kubenswrapper[4746]: E0129 16:54:31.798435 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 16:54:31 crc kubenswrapper[4746]: E0129 16:54:31.798929 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cz4l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(71c96526-7c37-42c2-896e-b551dd6ed5b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:31 crc kubenswrapper[4746]: E0129 16:54:31.800163 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" Jan 29 16:54:32 crc kubenswrapper[4746]: E0129 16:54:32.295841 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" Jan 29 16:54:32 crc kubenswrapper[4746]: E0129 16:54:32.512410 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 29 16:54:32 crc kubenswrapper[4746]: E0129 16:54:32.512588 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n68fh54chfh9dh59dh54fh5b6h688h665h594h5d4h5cfh96h659hd8hb9h58bh555hd8h668h689h5f4h57dh7dh7fh59ch65ch76h68hfdh658h5bbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7swmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(b931fc5d-d5c3-429f-9c40-073a56aed3ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:32 crc kubenswrapper[4746]: E0129 16:54:32.513840 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.307133 4746 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.371641 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.371827 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvj8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-r6btd_openstack(ef547608-42e6-43ce-9261-aac355f656c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.372742 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 
16:54:33.372812 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xp9rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-dmpf5_openstack(3ac52906-0c62-48db-8a62-c4080a9bd31a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.373925 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5" podUID="3ac52906-0c62-48db-8a62-c4080a9bd31a" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.373972 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-r6btd" podUID="ef547608-42e6-43ce-9261-aac355f656c1" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.404911 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.406707 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcd47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-f8x92_openstack(24cd3241-a45f-4261-b674-eba5b6ff7b41): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.408576 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92" podUID="24cd3241-a45f-4261-b674-eba5b6ff7b41" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.443449 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.443594 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed 
--no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bh72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-5m2gz_openstack(4eb70032-cdd8-4ab3-b927-e839d4807e7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:54:33 crc kubenswrapper[4746]: E0129 16:54:33.444963 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz" podUID="4eb70032-cdd8-4ab3-b927-e839d4807e7b" Jan 29 16:54:33 crc kubenswrapper[4746]: I0129 16:54:33.802104 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pplw4"] Jan 29 16:54:33 crc kubenswrapper[4746]: I0129 16:54:33.824390 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:54:34 crc kubenswrapper[4746]: I0129 16:54:34.046992 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hlgxj"] Jan 29 16:54:34 crc kubenswrapper[4746]: W0129 16:54:34.057554 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb69fbf3_38bd_403b_b1e6_fbd724d15250.slice/crio-901fc0bf3b78a3ffedb25a61b7454f5d6b0326cbc8ff850830b2ff42d479a117 WatchSource:0}: Error finding container 901fc0bf3b78a3ffedb25a61b7454f5d6b0326cbc8ff850830b2ff42d479a117: Status 404 returned error can't find the container with id 901fc0bf3b78a3ffedb25a61b7454f5d6b0326cbc8ff850830b2ff42d479a117 Jan 29 16:54:34 crc kubenswrapper[4746]: I0129 16:54:34.316632 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlgxj" 
event={"ID":"db69fbf3-38bd-403b-b1e6-fbd724d15250","Type":"ContainerStarted","Data":"901fc0bf3b78a3ffedb25a61b7454f5d6b0326cbc8ff850830b2ff42d479a117"} Jan 29 16:54:34 crc kubenswrapper[4746]: I0129 16:54:34.325853 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0439cd84-86aa-425b-91d1-5ab5a68e3210","Type":"ContainerStarted","Data":"fea67c55f1bff5d4dc42a5916e535df159d5f73d1d2a43970e6149e07b525916"} Jan 29 16:54:34 crc kubenswrapper[4746]: I0129 16:54:34.327858 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4" event={"ID":"9d0831ca-9258-426a-b0d5-9ae88e24daa2","Type":"ContainerStarted","Data":"3280aea4a753fd21e0bd1b9fd9444acee1b1712f409c4d01fd7d4fb20141f833"} Jan 29 16:54:34 crc kubenswrapper[4746]: E0129 16:54:34.334585 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz" podUID="4eb70032-cdd8-4ab3-b927-e839d4807e7b" Jan 29 16:54:34 crc kubenswrapper[4746]: E0129 16:54:34.334880 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92" podUID="24cd3241-a45f-4261-b674-eba5b6ff7b41" Jan 29 16:54:34 crc kubenswrapper[4746]: I0129 16:54:34.806055 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.044868 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.204315 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-r6btd" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.210766 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.318125 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-dns-svc\") pod \"ef547608-42e6-43ce-9261-aac355f656c1\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.318239 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp9rk\" (UniqueName: \"kubernetes.io/projected/3ac52906-0c62-48db-8a62-c4080a9bd31a-kube-api-access-xp9rk\") pod \"3ac52906-0c62-48db-8a62-c4080a9bd31a\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.318391 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac52906-0c62-48db-8a62-c4080a9bd31a-config\") pod \"3ac52906-0c62-48db-8a62-c4080a9bd31a\" (UID: \"3ac52906-0c62-48db-8a62-c4080a9bd31a\") " Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.318500 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvj8r\" (UniqueName: \"kubernetes.io/projected/ef547608-42e6-43ce-9261-aac355f656c1-kube-api-access-bvj8r\") pod \"ef547608-42e6-43ce-9261-aac355f656c1\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.318539 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-config\") pod \"ef547608-42e6-43ce-9261-aac355f656c1\" (UID: \"ef547608-42e6-43ce-9261-aac355f656c1\") " Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.318984 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef547608-42e6-43ce-9261-aac355f656c1" (UID: "ef547608-42e6-43ce-9261-aac355f656c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.319117 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-config" (OuterVolumeSpecName: "config") pod "ef547608-42e6-43ce-9261-aac355f656c1" (UID: "ef547608-42e6-43ce-9261-aac355f656c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.319464 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ac52906-0c62-48db-8a62-c4080a9bd31a-config" (OuterVolumeSpecName: "config") pod "3ac52906-0c62-48db-8a62-c4080a9bd31a" (UID: "3ac52906-0c62-48db-8a62-c4080a9bd31a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.325620 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac52906-0c62-48db-8a62-c4080a9bd31a-kube-api-access-xp9rk" (OuterVolumeSpecName: "kube-api-access-xp9rk") pod "3ac52906-0c62-48db-8a62-c4080a9bd31a" (UID: "3ac52906-0c62-48db-8a62-c4080a9bd31a"). InnerVolumeSpecName "kube-api-access-xp9rk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.328553 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef547608-42e6-43ce-9261-aac355f656c1-kube-api-access-bvj8r" (OuterVolumeSpecName: "kube-api-access-bvj8r") pod "ef547608-42e6-43ce-9261-aac355f656c1" (UID: "ef547608-42e6-43ce-9261-aac355f656c1"). InnerVolumeSpecName "kube-api-access-bvj8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.335057 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"56be91c6-da82-45b5-9b98-d5b6f05f244e","Type":"ContainerStarted","Data":"5097f64a55ef9ce59ef3562295883e2ce21b691c3d93b381573caf41b0d08415"} Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.336172 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5" event={"ID":"3ac52906-0c62-48db-8a62-c4080a9bd31a","Type":"ContainerDied","Data":"e35a256dfc01329b14400e0c5daefdedbde0522f6f101630cf9c3ed3752d2671"} Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.336199 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-dmpf5" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.337209 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4555f5c-9440-4402-96f9-e2bf40c5cfb1","Type":"ContainerStarted","Data":"1e51bb42588a10a699c38f4f4fb5f3c36d31e6ea69da67e5787d4cadad0b65bf"} Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.338182 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-r6btd" event={"ID":"ef547608-42e6-43ce-9261-aac355f656c1","Type":"ContainerDied","Data":"43ec50246304ea23ccf9226da128fd44349d1d59aa97c4b23cef3b98efc263de"} Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.338253 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-r6btd" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.420949 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.420987 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp9rk\" (UniqueName: \"kubernetes.io/projected/3ac52906-0c62-48db-8a62-c4080a9bd31a-kube-api-access-xp9rk\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.421003 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ac52906-0c62-48db-8a62-c4080a9bd31a-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.421016 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvj8r\" (UniqueName: \"kubernetes.io/projected/ef547608-42e6-43ce-9261-aac355f656c1-kube-api-access-bvj8r\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.421029 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef547608-42e6-43ce-9261-aac355f656c1-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.427314 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"] Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.435102 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-dmpf5"] Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.457812 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-r6btd"] Jan 29 16:54:35 crc kubenswrapper[4746]: I0129 16:54:35.473929 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-r6btd"] Jan 29 16:54:36 crc kubenswrapper[4746]: I0129 16:54:36.464624 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ac52906-0c62-48db-8a62-c4080a9bd31a" path="/var/lib/kubelet/pods/3ac52906-0c62-48db-8a62-c4080a9bd31a/volumes" Jan 29 16:54:36 crc kubenswrapper[4746]: I0129 16:54:36.465087 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef547608-42e6-43ce-9261-aac355f656c1" path="/var/lib/kubelet/pods/ef547608-42e6-43ce-9261-aac355f656c1/volumes" Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.378414 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4" event={"ID":"9d0831ca-9258-426a-b0d5-9ae88e24daa2","Type":"ContainerStarted","Data":"b6dcfeab99a5a8781df1a90e3a3c6cbe494b01e59a357c3e1aea216f06fcbe66"} Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.378812 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pplw4" Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.380789 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"56be91c6-da82-45b5-9b98-d5b6f05f244e","Type":"ContainerStarted","Data":"853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830"} Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.383003 4746 generic.go:334] "Generic (PLEG): container finished" podID="db69fbf3-38bd-403b-b1e6-fbd724d15250" 
containerID="184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c" exitCode=0 Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.383085 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlgxj" event={"ID":"db69fbf3-38bd-403b-b1e6-fbd724d15250","Type":"ContainerDied","Data":"184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c"} Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.384910 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4555f5c-9440-4402-96f9-e2bf40c5cfb1","Type":"ContainerStarted","Data":"3126dbd2cf50d8c8e7a9683b6da26e8324aad907d712924fe7acecce195f923a"} Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.386834 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0439cd84-86aa-425b-91d1-5ab5a68e3210","Type":"ContainerStarted","Data":"ffc65679e605dec8adc14cf14da40b1627ad469087535fd49a93a48f43cd94dc"} Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.386951 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.396915 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pplw4" podStartSLOduration=28.062823862 podStartE2EDuration="32.396895923s" podCreationTimestamp="2026-01-29 16:54:07 +0000 UTC" firstStartedPulling="2026-01-29 16:54:33.847582015 +0000 UTC m=+1196.248166659" lastFinishedPulling="2026-01-29 16:54:38.181654076 +0000 UTC m=+1200.582238720" observedRunningTime="2026-01-29 16:54:39.395603149 +0000 UTC m=+1201.796187793" watchObservedRunningTime="2026-01-29 16:54:39.396895923 +0000 UTC m=+1201.797480567" Jan 29 16:54:39 crc kubenswrapper[4746]: I0129 16:54:39.438086 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=32.424645763 podStartE2EDuration="36.438056431s" podCreationTimestamp="2026-01-29 16:54:03 +0000 UTC" firstStartedPulling="2026-01-29 16:54:33.877417361 +0000 UTC m=+1196.278002005" lastFinishedPulling="2026-01-29 16:54:37.890828029 +0000 UTC m=+1200.291412673" observedRunningTime="2026-01-29 16:54:39.427449818 +0000 UTC m=+1201.828034472" watchObservedRunningTime="2026-01-29 16:54:39.438056431 +0000 UTC m=+1201.838641075" Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.400685 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlgxj" event={"ID":"db69fbf3-38bd-403b-b1e6-fbd724d15250","Type":"ContainerStarted","Data":"b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b"} Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.401033 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hlgxj" Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.401046 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hlgxj" Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.401054 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlgxj" event={"ID":"db69fbf3-38bd-403b-b1e6-fbd724d15250","Type":"ContainerStarted","Data":"ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032"} Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.407391 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"b4555f5c-9440-4402-96f9-e2bf40c5cfb1","Type":"ContainerStarted","Data":"67b114c6f1d8b35d3fbd8d9d423762318eaad81860bbb0ff538250cf11081b4c"} Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.410489 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"56be91c6-da82-45b5-9b98-d5b6f05f244e","Type":"ContainerStarted","Data":"3c2e531058476f465ac3dbbb01033f0d27b609383659cd5f42cf8efcfad81000"} Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.424321 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hlgxj" podStartSLOduration=29.592649579 podStartE2EDuration="33.424305489s" podCreationTimestamp="2026-01-29 16:54:07 +0000 UTC" firstStartedPulling="2026-01-29 16:54:34.059496138 +0000 UTC m=+1196.460080782" lastFinishedPulling="2026-01-29 16:54:37.891152048 +0000 UTC m=+1200.291736692" observedRunningTime="2026-01-29 16:54:40.42283613 +0000 UTC m=+1202.823420784" watchObservedRunningTime="2026-01-29 16:54:40.424305489 +0000 UTC m=+1202.824890133" Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.456314 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=26.781372829 podStartE2EDuration="31.456294502s" podCreationTimestamp="2026-01-29 16:54:09 +0000 UTC" firstStartedPulling="2026-01-29 16:54:35.111714186 +0000 UTC m=+1197.512298830" lastFinishedPulling="2026-01-29 16:54:39.786635859 +0000 UTC m=+1202.187220503" observedRunningTime="2026-01-29 16:54:40.453085777 +0000 UTC m=+1202.853670421" watchObservedRunningTime="2026-01-29 16:54:40.456294502 +0000 UTC m=+1202.856879146" Jan 29 16:54:40 crc kubenswrapper[4746]: I0129 16:54:40.478275 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.807369332 podStartE2EDuration="30.478259388s" podCreationTimestamp="2026-01-29 16:54:10 +0000 UTC" firstStartedPulling="2026-01-29 16:54:35.112278891 +0000 UTC m=+1197.512863535" lastFinishedPulling="2026-01-29 16:54:39.783168947 +0000 UTC m=+1202.183753591" observedRunningTime="2026-01-29 16:54:40.475402212 +0000 UTC m=+1202.875986856" watchObservedRunningTime="2026-01-29 16:54:40.478259388 +0000 UTC m=+1202.878844032" Jan 29 16:54:41 crc kubenswrapper[4746]: I0129 16:54:41.135731 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:41 crc kubenswrapper[4746]: I0129 16:54:41.135822 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:41 crc kubenswrapper[4746]: I0129 16:54:41.173269 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:41 crc kubenswrapper[4746]: I0129 16:54:41.404157 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:41 crc kubenswrapper[4746]: I0129 16:54:41.404224 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:41 crc kubenswrapper[4746]: I0129 16:54:41.439651 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:42 crc kubenswrapper[4746]: I0129 16:54:42.424406 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"1f5617cc-a91a-4eb7-83d9-25f01bcb890c","Type":"ContainerStarted","Data":"283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c"} Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.433406 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"717a3fe2-fd76-47c2-b7f2-859dd5186f9c","Type":"ContainerStarted","Data":"ae5a4edf6b68a4c05732cca45dbe163b03db7a46e160be1412e89340c7ef3b1d"} Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.434963 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788","Type":"ContainerStarted","Data":"560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255"} Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.474585 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.750569 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-5m2gz"] Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.810389 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794868bd45-dxl4s"] Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.811627 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.821692 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.825364 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rm6fv"] Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.826644 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.829643 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.840818 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rm6fv"] Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.849359 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-dxl4s"] Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998674 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-config\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rpzq\" (UniqueName: \"kubernetes.io/projected/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-kube-api-access-9rpzq\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998742 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-dns-svc\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998775 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovn-rundir\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998823 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998847 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdjt\" (UniqueName: \"kubernetes.io/projected/990c722e-2e75-4a70-9825-0a17324ecac6-kube-api-access-pzdjt\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998869 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovs-rundir\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998903 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-config\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.998977 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-combined-ca-bundle\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:43 crc kubenswrapper[4746]: I0129 16:54:43.999016 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.050457 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-f8x92"] Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.088962 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-cjtpx"] Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.090558 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.094363 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100642 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100688 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdjt\" (UniqueName: \"kubernetes.io/projected/990c722e-2e75-4a70-9825-0a17324ecac6-kube-api-access-pzdjt\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100717 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovs-rundir\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100753 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-config\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100775 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-combined-ca-bundle\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100807 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100832 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-config\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100849 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rpzq\" (UniqueName: \"kubernetes.io/projected/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-kube-api-access-9rpzq\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100866 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-dns-svc\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.100893 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovn-rundir\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.101448 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovn-rundir\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.102623 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.103329 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovs-rundir\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.104295 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-config\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.104470 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-config\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.104976 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-dns-svc\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.107026 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.108081 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-combined-ca-bundle\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.109414 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.110271 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-cjtpx"] Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.138337 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rpzq\" (UniqueName: \"kubernetes.io/projected/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-kube-api-access-9rpzq\") pod \"ovn-controller-metrics-rm6fv\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.145875 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdjt\" (UniqueName: \"kubernetes.io/projected/990c722e-2e75-4a70-9825-0a17324ecac6-kube-api-access-pzdjt\") pod \"dnsmasq-dns-794868bd45-dxl4s\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.149507 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.164753 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.201998 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.203857 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvqg\" (UniqueName: \"kubernetes.io/projected/b4a743f8-e233-41b7-bd9b-ea84be94cf13-kube-api-access-rfvqg\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.203972 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.203995 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-config\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.204057 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.205288 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.308826 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bh72\" (UniqueName: \"kubernetes.io/projected/4eb70032-cdd8-4ab3-b927-e839d4807e7b-kube-api-access-9bh72\") pod \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.309057 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-dns-svc\") pod \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.309125 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-config\") pod \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\" (UID: \"4eb70032-cdd8-4ab3-b927-e839d4807e7b\") " Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.309344 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvqg\" (UniqueName: \"kubernetes.io/projected/b4a743f8-e233-41b7-bd9b-ea84be94cf13-kube-api-access-rfvqg\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.309378 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.309394 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-config\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.309425 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.309458 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.310383 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.311594 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4eb70032-cdd8-4ab3-b927-e839d4807e7b" (UID: "4eb70032-cdd8-4ab3-b927-e839d4807e7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.311958 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-config" (OuterVolumeSpecName: "config") pod "4eb70032-cdd8-4ab3-b927-e839d4807e7b" (UID: "4eb70032-cdd8-4ab3-b927-e839d4807e7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.312724 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-config\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.315678 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.316349 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb70032-cdd8-4ab3-b927-e839d4807e7b-kube-api-access-9bh72" (OuterVolumeSpecName: "kube-api-access-9bh72") pod "4eb70032-cdd8-4ab3-b927-e839d4807e7b" (UID: "4eb70032-cdd8-4ab3-b927-e839d4807e7b"). InnerVolumeSpecName "kube-api-access-9bh72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.332538 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.343961 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvqg\" (UniqueName: \"kubernetes.io/projected/b4a743f8-e233-41b7-bd9b-ea84be94cf13-kube-api-access-rfvqg\") pod \"dnsmasq-dns-757dc6fff9-cjtpx\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.395208 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.410723 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bh72\" (UniqueName: \"kubernetes.io/projected/4eb70032-cdd8-4ab3-b927-e839d4807e7b-kube-api-access-9bh72\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.410751 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.410777 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4eb70032-cdd8-4ab3-b927-e839d4807e7b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.444693 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.455637 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.498877 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.515596 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-config\") pod \"24cd3241-a45f-4261-b674-eba5b6ff7b41\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.515835 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-dns-svc\") pod \"24cd3241-a45f-4261-b674-eba5b6ff7b41\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.515864 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcd47\" (UniqueName: \"kubernetes.io/projected/24cd3241-a45f-4261-b674-eba5b6ff7b41-kube-api-access-tcd47\") pod \"24cd3241-a45f-4261-b674-eba5b6ff7b41\" (UID: \"24cd3241-a45f-4261-b674-eba5b6ff7b41\") " Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.516329 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-config" (OuterVolumeSpecName: "config") pod "24cd3241-a45f-4261-b674-eba5b6ff7b41" (UID: "24cd3241-a45f-4261-b674-eba5b6ff7b41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.516477 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24cd3241-a45f-4261-b674-eba5b6ff7b41" (UID: "24cd3241-a45f-4261-b674-eba5b6ff7b41"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.520983 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cd3241-a45f-4261-b674-eba5b6ff7b41-kube-api-access-tcd47" (OuterVolumeSpecName: "kube-api-access-tcd47") pod "24cd3241-a45f-4261-b674-eba5b6ff7b41" (UID: "24cd3241-a45f-4261-b674-eba5b6ff7b41"). InnerVolumeSpecName "kube-api-access-tcd47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.530107 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-f8x92" event={"ID":"24cd3241-a45f-4261-b674-eba5b6ff7b41","Type":"ContainerDied","Data":"ada9664f31c33431f691be7f0ba3a47e10b45ea5365cbbcab0fd05becf99a51f"} Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.530173 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-5m2gz" event={"ID":"4eb70032-cdd8-4ab3-b927-e839d4807e7b","Type":"ContainerDied","Data":"bff5790f8ca474a4cc7dbd571da5f0322981c481cd775f292ad05620c15f28ef"} Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.571738 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-5m2gz"] Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.584129 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-5m2gz"] Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.619414 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.619458 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcd47\" (UniqueName: \"kubernetes.io/projected/24cd3241-a45f-4261-b674-eba5b6ff7b41-kube-api-access-tcd47\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.619473 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cd3241-a45f-4261-b674-eba5b6ff7b41-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.685001 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rm6fv"] Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.815954 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-dxl4s"] Jan 29 16:54:44 crc kubenswrapper[4746]: W0129 16:54:44.822844 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod990c722e_2e75_4a70_9825_0a17324ecac6.slice/crio-a94da6e122c122a24de608d65a50ce282cf586454f14506ae956385bcc8f6c3e WatchSource:0}: Error finding container a94da6e122c122a24de608d65a50ce282cf586454f14506ae956385bcc8f6c3e: Status 404 returned error can't find the container with id a94da6e122c122a24de608d65a50ce282cf586454f14506ae956385bcc8f6c3e Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.848070 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-f8x92"] Jan 29 16:54:44 crc kubenswrapper[4746]: I0129 16:54:44.855476 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-f8x92"] Jan 29 16:54:45 crc kubenswrapper[4746]: E0129 16:54:45.004827 4746 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24cd3241_a45f_4261_b674_eba5b6ff7b41.slice/crio-ada9664f31c33431f691be7f0ba3a47e10b45ea5365cbbcab0fd05becf99a51f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24cd3241_a45f_4261_b674_eba5b6ff7b41.slice\": RecentStats: unable to find data in memory cache]" Jan 29 16:54:45 crc kubenswrapper[4746]: I0129 16:54:45.010565 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-cjtpx"] Jan 29 16:54:45 crc kubenswrapper[4746]: W0129 16:54:45.011167 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4a743f8_e233_41b7_bd9b_ea84be94cf13.slice/crio-e78756a529e75f19949e2b900c7eccea210e4037a1a56f40fe58ed1f15e558b0 WatchSource:0}: Error finding container e78756a529e75f19949e2b900c7eccea210e4037a1a56f40fe58ed1f15e558b0: Status 404 returned error can't find the container with id e78756a529e75f19949e2b900c7eccea210e4037a1a56f40fe58ed1f15e558b0 Jan 29 16:54:45 crc kubenswrapper[4746]: I0129 16:54:45.464333 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" event={"ID":"990c722e-2e75-4a70-9825-0a17324ecac6","Type":"ContainerStarted","Data":"a94da6e122c122a24de608d65a50ce282cf586454f14506ae956385bcc8f6c3e"} Jan 29 16:54:45 crc kubenswrapper[4746]: I0129 16:54:45.465293 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" event={"ID":"b4a743f8-e233-41b7-bd9b-ea84be94cf13","Type":"ContainerStarted","Data":"e78756a529e75f19949e2b900c7eccea210e4037a1a56f40fe58ed1f15e558b0"} Jan 29 16:54:45 crc kubenswrapper[4746]: I0129 16:54:45.466767 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rm6fv" event={"ID":"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac","Type":"ContainerStarted","Data":"a6036d8b84305af51faed49df55cdf4b3f55a3a87a4f810532cd0208309771f4"} Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.172612 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.313671 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.315653 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.322610 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.322610 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.322965 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.323208 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fh75l" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.338003 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.449214 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.449252 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.449279 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5mv\" (UniqueName: \"kubernetes.io/projected/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-kube-api-access-wb5mv\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.449299 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.449322 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.449351 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-scripts\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.449404 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-config\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: 
I0129 16:54:46.462508 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24cd3241-a45f-4261-b674-eba5b6ff7b41" path="/var/lib/kubelet/pods/24cd3241-a45f-4261-b674-eba5b6ff7b41/volumes" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.462943 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb70032-cdd8-4ab3-b927-e839d4807e7b" path="/var/lib/kubelet/pods/4eb70032-cdd8-4ab3-b927-e839d4807e7b/volumes" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.479107 4746 generic.go:334] "Generic (PLEG): container finished" podID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerID="441a28b421daf41c66d9a8e6ce5d4cc7b564c9ec49da6a899f9ddcefb14387e6" exitCode=0 Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.479153 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" event={"ID":"b4a743f8-e233-41b7-bd9b-ea84be94cf13","Type":"ContainerDied","Data":"441a28b421daf41c66d9a8e6ce5d4cc7b564c9ec49da6a899f9ddcefb14387e6"} Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.482626 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"717a3fe2-fd76-47c2-b7f2-859dd5186f9c","Type":"ContainerDied","Data":"ae5a4edf6b68a4c05732cca45dbe163b03db7a46e160be1412e89340c7ef3b1d"} Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.482501 4746 generic.go:334] "Generic (PLEG): container finished" podID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerID="ae5a4edf6b68a4c05732cca45dbe163b03db7a46e160be1412e89340c7ef3b1d" exitCode=0 Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.488786 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rm6fv" event={"ID":"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac","Type":"ContainerStarted","Data":"bd785516068c374d3a1fab30f0a344202849aa7b8520ee5b2eebfb62b9ebbc3e"} Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.492538 4746 generic.go:334] "Generic (PLEG): container finished" podID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerID="283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c" exitCode=0 Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.492637 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f5617cc-a91a-4eb7-83d9-25f01bcb890c","Type":"ContainerDied","Data":"283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c"} Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.502623 4746 generic.go:334] "Generic (PLEG): container finished" podID="990c722e-2e75-4a70-9825-0a17324ecac6" containerID="4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf" exitCode=0 Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.502666 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" event={"ID":"990c722e-2e75-4a70-9825-0a17324ecac6","Type":"ContainerDied","Data":"4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf"} Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.550457 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-config\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.551531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.551576 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.551634 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5mv\" (UniqueName: \"kubernetes.io/projected/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-kube-api-access-wb5mv\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.551660 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.551699 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.551751 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-scripts\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.551418 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-config\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.550727 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rm6fv" podStartSLOduration=3.5506674499999997 podStartE2EDuration="3.55066745s" podCreationTimestamp="2026-01-29 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:54:46.547139226 +0000 UTC m=+1208.947723880" watchObservedRunningTime="2026-01-29 16:54:46.55066745 +0000 UTC m=+1208.951252094" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.555891 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-scripts\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.557420 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.557720 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.561498 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.563882 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.597220 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5mv\" (UniqueName: \"kubernetes.io/projected/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-kube-api-access-wb5mv\") pod \"ovn-northd-0\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " pod="openstack/ovn-northd-0" Jan 29 16:54:46 crc kubenswrapper[4746]: I0129 16:54:46.653110 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.131757 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.514359 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"717a3fe2-fd76-47c2-b7f2-859dd5186f9c","Type":"ContainerStarted","Data":"0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c"} Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.516957 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f5617cc-a91a-4eb7-83d9-25f01bcb890c","Type":"ContainerStarted","Data":"fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3"} Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.520336 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cdeb76e4-0143-44ad-935d-eb486d6fa9dc","Type":"ContainerStarted","Data":"dab3202467864c8b353aace393406de99fdb3c04d92135b6af1ed8c3b732dacc"} Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.523593 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" event={"ID":"990c722e-2e75-4a70-9825-0a17324ecac6","Type":"ContainerStarted","Data":"81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf"} Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.523864 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.530711 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" event={"ID":"b4a743f8-e233-41b7-bd9b-ea84be94cf13","Type":"ContainerStarted","Data":"8cb5be25a6fd4928a6bc90ec7baf6c982541351c23de1fc4bee783e5feb98299"} Jan 29 16:54:47 
crc kubenswrapper[4746]: I0129 16:54:47.530838 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.533972 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b931fc5d-d5c3-429f-9c40-073a56aed3ba","Type":"ContainerStarted","Data":"97ce90f5b14d69f5966c8a456653fa79fce41aed308c4ece923536d92ee0a358"} Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.534662 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.548226 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371988.30657 podStartE2EDuration="48.548205829s" podCreationTimestamp="2026-01-29 16:53:59 +0000 UTC" firstStartedPulling="2026-01-29 16:54:01.680008849 +0000 UTC m=+1164.080593493" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:54:47.53775965 +0000 UTC m=+1209.938344294" watchObservedRunningTime="2026-01-29 16:54:47.548205829 +0000 UTC m=+1209.948790483" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.559318 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" podStartSLOduration=2.781731583 podStartE2EDuration="3.559302095s" podCreationTimestamp="2026-01-29 16:54:44 +0000 UTC" firstStartedPulling="2026-01-29 16:54:45.013762332 +0000 UTC m=+1207.414346976" lastFinishedPulling="2026-01-29 16:54:45.791332844 +0000 UTC m=+1208.191917488" observedRunningTime="2026-01-29 16:54:47.557901828 +0000 UTC m=+1209.958486472" watchObservedRunningTime="2026-01-29 16:54:47.559302095 +0000 UTC m=+1209.959886739" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.585100 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.6216003089999997 podStartE2EDuration="46.585080963s" podCreationTimestamp="2026-01-29 16:54:01 +0000 UTC" firstStartedPulling="2026-01-29 16:54:02.980850816 +0000 UTC m=+1165.381435460" lastFinishedPulling="2026-01-29 16:54:46.94433147 +0000 UTC m=+1209.344916114" observedRunningTime="2026-01-29 16:54:47.577741258 +0000 UTC m=+1209.978325912" watchObservedRunningTime="2026-01-29 16:54:47.585080963 +0000 UTC m=+1209.985665607" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.605410 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.423183989 podStartE2EDuration="47.605389674s" podCreationTimestamp="2026-01-29 16:54:00 +0000 UTC" firstStartedPulling="2026-01-29 16:54:02.926809614 +0000 UTC m=+1165.327394258" lastFinishedPulling="2026-01-29 16:54:42.109015309 +0000 UTC m=+1204.509599943" observedRunningTime="2026-01-29 16:54:47.598971413 +0000 UTC m=+1209.999556077" watchObservedRunningTime="2026-01-29 16:54:47.605389674 +0000 UTC m=+1210.005974318" Jan 29 16:54:47 crc kubenswrapper[4746]: I0129 16:54:47.619487 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" podStartSLOduration=3.907203181 podStartE2EDuration="4.61946502s" podCreationTimestamp="2026-01-29 16:54:43 +0000 UTC" firstStartedPulling="2026-01-29 16:54:44.834363997 +0000 UTC m=+1207.234948641" lastFinishedPulling="2026-01-29 16:54:45.546625836 +0000 UTC m=+1207.947210480" 
observedRunningTime="2026-01-29 16:54:47.614738974 +0000 UTC m=+1210.015323618" watchObservedRunningTime="2026-01-29 16:54:47.61946502 +0000 UTC m=+1210.020049664" Jan 29 16:54:49 crc kubenswrapper[4746]: I0129 16:54:49.549496 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71c96526-7c37-42c2-896e-b551dd6ed5b8","Type":"ContainerStarted","Data":"f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e"} Jan 29 16:54:50 crc kubenswrapper[4746]: I0129 16:54:50.558362 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cdeb76e4-0143-44ad-935d-eb486d6fa9dc","Type":"ContainerStarted","Data":"251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34"} Jan 29 16:54:50 crc kubenswrapper[4746]: I0129 16:54:50.817238 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 16:54:50 crc kubenswrapper[4746]: I0129 16:54:50.817355 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 16:54:52 crc kubenswrapper[4746]: I0129 16:54:52.169066 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:52 crc kubenswrapper[4746]: I0129 16:54:52.169709 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:52 crc kubenswrapper[4746]: I0129 16:54:52.306687 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 16:54:52 crc kubenswrapper[4746]: I0129 16:54:52.576485 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cdeb76e4-0143-44ad-935d-eb486d6fa9dc","Type":"ContainerStarted","Data":"8d41c00ff4e878b0ca19eebfb37df14fb06c2ce7bba3e45e02c666faf55cdc88"} Jan 29 16:54:52 crc kubenswrapper[4746]: I0129 16:54:52.603032 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.088653186 podStartE2EDuration="6.603015446s" podCreationTimestamp="2026-01-29 16:54:46 +0000 UTC" firstStartedPulling="2026-01-29 16:54:47.132973853 +0000 UTC m=+1209.533558497" lastFinishedPulling="2026-01-29 16:54:49.647336113 +0000 UTC m=+1212.047920757" observedRunningTime="2026-01-29 16:54:52.599639936 +0000 UTC m=+1215.000224580" watchObservedRunningTime="2026-01-29 16:54:52.603015446 +0000 UTC m=+1215.003600090" Jan 29 16:54:52 crc kubenswrapper[4746]: E0129 16:54:52.942523 4746 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.22:38166->38.102.83.22:36027: write tcp 38.102.83.22:38166->38.102.83.22:36027: write: broken pipe Jan 29 16:54:53 crc kubenswrapper[4746]: I0129 16:54:53.582803 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.155379 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.245034 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-cjtpx"] Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.245611 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerName="dnsmasq-dns" 
containerID="cri-o://8cb5be25a6fd4928a6bc90ec7baf6c982541351c23de1fc4bee783e5feb98299" gracePeriod=10 Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.252236 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.317675 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-wmcfl"] Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.321079 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.338904 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-wmcfl"] Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.383630 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.383693 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.383742 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.383822 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgwdb\" (UniqueName: \"kubernetes.io/projected/f09729b5-cce8-4671-b19d-e8fb14ad533c-kube-api-access-dgwdb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.383884 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-config\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.488356 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.488413 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " 
pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.488472 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.488568 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgwdb\" (UniqueName: \"kubernetes.io/projected/f09729b5-cce8-4671-b19d-e8fb14ad533c-kube-api-access-dgwdb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.488622 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-config\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.490472 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.491785 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-config\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.494580 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.494643 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.507031 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.538229 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgwdb\" (UniqueName: \"kubernetes.io/projected/f09729b5-cce8-4671-b19d-e8fb14ad533c-kube-api-access-dgwdb\") pod \"dnsmasq-dns-6cb545bd4c-wmcfl\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") " pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.590921 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerID="8cb5be25a6fd4928a6bc90ec7baf6c982541351c23de1fc4bee783e5feb98299" exitCode=0 Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.591296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" event={"ID":"b4a743f8-e233-41b7-bd9b-ea84be94cf13","Type":"ContainerDied","Data":"8cb5be25a6fd4928a6bc90ec7baf6c982541351c23de1fc4bee783e5feb98299"} Jan 29 16:54:54 crc kubenswrapper[4746]: I0129 16:54:54.650017 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.089973 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-wmcfl"] Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.328776 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.379500 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:54:55 crc kubenswrapper[4746]: E0129 16:54:55.379925 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerName="dnsmasq-dns" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.379946 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerName="dnsmasq-dns" Jan 29 16:54:55 crc kubenswrapper[4746]: E0129 16:54:55.379987 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerName="init" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.379998 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerName="init" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.383314 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" containerName="dnsmasq-dns" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.393596 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.399573 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.407611 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.407697 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.407857 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.408027 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-config\") pod \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.408048 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mw996" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.408088 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-dns-svc\") pod \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.408143 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-nb\") pod \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.408208 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-sb\") pod \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.408239 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfvqg\" (UniqueName: \"kubernetes.io/projected/b4a743f8-e233-41b7-bd9b-ea84be94cf13-kube-api-access-rfvqg\") pod \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\" (UID: \"b4a743f8-e233-41b7-bd9b-ea84be94cf13\") " Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.413491 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a743f8-e233-41b7-bd9b-ea84be94cf13-kube-api-access-rfvqg" (OuterVolumeSpecName: "kube-api-access-rfvqg") pod "b4a743f8-e233-41b7-bd9b-ea84be94cf13" (UID: "b4a743f8-e233-41b7-bd9b-ea84be94cf13"). InnerVolumeSpecName "kube-api-access-rfvqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.457830 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4a743f8-e233-41b7-bd9b-ea84be94cf13" (UID: "b4a743f8-e233-41b7-bd9b-ea84be94cf13"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.467375 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4a743f8-e233-41b7-bd9b-ea84be94cf13" (UID: "b4a743f8-e233-41b7-bd9b-ea84be94cf13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.479059 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-config" (OuterVolumeSpecName: "config") pod "b4a743f8-e233-41b7-bd9b-ea84be94cf13" (UID: "b4a743f8-e233-41b7-bd9b-ea84be94cf13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.494141 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4a743f8-e233-41b7-bd9b-ea84be94cf13" (UID: "b4a743f8-e233-41b7-bd9b-ea84be94cf13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.510233 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-cache\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.510469 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-lock\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.510547 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-656bn\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-kube-api-access-656bn\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.510638 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.510767 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.510883 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " 
pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.510986 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.511009 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.511018 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.511027 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4a743f8-e233-41b7-bd9b-ea84be94cf13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.511037 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfvqg\" (UniqueName: \"kubernetes.io/projected/b4a743f8-e233-41b7-bd9b-ea84be94cf13-kube-api-access-rfvqg\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.604900 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" event={"ID":"b4a743f8-e233-41b7-bd9b-ea84be94cf13","Type":"ContainerDied","Data":"e78756a529e75f19949e2b900c7eccea210e4037a1a56f40fe58ed1f15e558b0"} Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.604951 4746 scope.go:117] "RemoveContainer" containerID="8cb5be25a6fd4928a6bc90ec7baf6c982541351c23de1fc4bee783e5feb98299" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.605507 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-cjtpx" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.606493 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" event={"ID":"f09729b5-cce8-4671-b19d-e8fb14ad533c","Type":"ContainerStarted","Data":"edea334f335f6124118177c4d5d75bd78483607fab374edae370a9ffd740274e"} Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.606513 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" event={"ID":"f09729b5-cce8-4671-b19d-e8fb14ad533c","Type":"ContainerStarted","Data":"dcf3c0deda34111f192dfa67ed2a547ecbedb1bfabb2da5155f027277a706b5a"} Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.612557 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-cache\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.612592 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-lock\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.612618 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-656bn\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-kube-api-access-656bn\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.612663 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.612729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.612772 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.613035 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-cache\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: E0129 16:54:55.613957 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:54:55 crc kubenswrapper[4746]: E0129 16:54:55.622384 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 
29 16:54:55 crc kubenswrapper[4746]: E0129 16:54:55.622527 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift podName:4434dba0-90da-4ac0-8cd4-5c2babfdb2eb nodeName:}" failed. No retries permitted until 2026-01-29 16:54:56.122493521 +0000 UTC m=+1218.523078175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift") pod "swift-storage-0" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb") : configmap "swift-ring-files" not found Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.614076 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.633417 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.634280 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-lock\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.636505 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-656bn\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-kube-api-access-656bn\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.642480 4746 scope.go:117] "RemoveContainer" containerID="441a28b421daf41c66d9a8e6ce5d4cc7b564c9ec49da6a899f9ddcefb14387e6" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.656800 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-cjtpx"] Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.657327 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:55 crc kubenswrapper[4746]: I0129 16:54:55.664308 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-cjtpx"] Jan 29 16:54:56 crc kubenswrapper[4746]: I0129 16:54:56.222576 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:56 crc kubenswrapper[4746]: E0129 16:54:56.223090 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:54:56 crc kubenswrapper[4746]: E0129 16:54:56.223102 4746 projected.go:194] Error preparing data for projected 
volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:54:56 crc kubenswrapper[4746]: E0129 16:54:56.223150 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift podName:4434dba0-90da-4ac0-8cd4-5c2babfdb2eb nodeName:}" failed. No retries permitted until 2026-01-29 16:54:57.223131203 +0000 UTC m=+1219.623715847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift") pod "swift-storage-0" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb") : configmap "swift-ring-files" not found Jan 29 16:54:56 crc kubenswrapper[4746]: I0129 16:54:56.458106 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a743f8-e233-41b7-bd9b-ea84be94cf13" path="/var/lib/kubelet/pods/b4a743f8-e233-41b7-bd9b-ea84be94cf13/volumes" Jan 29 16:54:56 crc kubenswrapper[4746]: I0129 16:54:56.616268 4746 generic.go:334] "Generic (PLEG): container finished" podID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerID="edea334f335f6124118177c4d5d75bd78483607fab374edae370a9ffd740274e" exitCode=0 Jan 29 16:54:56 crc kubenswrapper[4746]: I0129 16:54:56.616327 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" event={"ID":"f09729b5-cce8-4671-b19d-e8fb14ad533c","Type":"ContainerDied","Data":"edea334f335f6124118177c4d5d75bd78483607fab374edae370a9ffd740274e"} Jan 29 16:54:56 crc kubenswrapper[4746]: I0129 16:54:56.825021 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 16:54:56 crc kubenswrapper[4746]: I0129 16:54:56.922500 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="galera" probeResult="failure" output=< Jan 29 16:54:56 crc kubenswrapper[4746]: wsrep_local_state_comment (Joined) differs from Synced Jan 29 16:54:56 crc kubenswrapper[4746]: > Jan 29 16:54:57 crc kubenswrapper[4746]: I0129 16:54:57.238341 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:57 crc kubenswrapper[4746]: E0129 16:54:57.238540 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:54:57 crc kubenswrapper[4746]: E0129 16:54:57.238564 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:54:57 crc kubenswrapper[4746]: E0129 16:54:57.238633 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift podName:4434dba0-90da-4ac0-8cd4-5c2babfdb2eb nodeName:}" failed. No retries permitted until 2026-01-29 16:54:59.238617081 +0000 UTC m=+1221.639201715 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift") pod "swift-storage-0" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb") : configmap "swift-ring-files" not found Jan 29 16:54:58 crc kubenswrapper[4746]: I0129 16:54:58.294793 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:58 crc kubenswrapper[4746]: I0129 16:54:58.378749 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 16:54:58 crc kubenswrapper[4746]: I0129 16:54:58.634569 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" event={"ID":"f09729b5-cce8-4671-b19d-e8fb14ad533c","Type":"ContainerStarted","Data":"8e139f6609870c3d06003df1422423fde0513fe9d6cdedd81297a493b0a2a9e1"} Jan 29 16:54:58 crc kubenswrapper[4746]: I0129 16:54:58.656495 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" podStartSLOduration=4.656476262 podStartE2EDuration="4.656476262s" podCreationTimestamp="2026-01-29 16:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:54:58.654023767 +0000 UTC m=+1221.054608411" watchObservedRunningTime="2026-01-29 16:54:58.656476262 +0000 UTC m=+1221.057060906" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.273924 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:54:59 crc kubenswrapper[4746]: E0129 16:54:59.274258 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:54:59 crc kubenswrapper[4746]: E0129 16:54:59.274528 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:54:59 crc kubenswrapper[4746]: E0129 16:54:59.274667 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift podName:4434dba0-90da-4ac0-8cd4-5c2babfdb2eb nodeName:}" failed. No retries permitted until 2026-01-29 16:55:03.274647052 +0000 UTC m=+1225.675231696 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift") pod "swift-storage-0" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb") : configmap "swift-ring-files" not found Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.290147 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-4wf6f"] Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.291575 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.293628 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.294216 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.294406 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.314678 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-4wf6f"] Jan 29 16:54:59 crc kubenswrapper[4746]: E0129 16:54:59.315379 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-qbkbd ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-qbkbd ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-4wf6f" podUID="c943030b-61ba-498a-8763-9d8f51b90792" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.329850 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-4wf6f"] Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.339370 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-p9k8d"] Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.340742 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.349411 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-p9k8d"] Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375033 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkbd\" (UniqueName: \"kubernetes.io/projected/c943030b-61ba-498a-8763-9d8f51b90792-kube-api-access-qbkbd\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375080 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-ring-data-devices\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375108 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-scripts\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375212 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-scripts\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc 
kubenswrapper[4746]: I0129 16:54:59.375240 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-dispersionconf\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375263 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2ab64d41-3d73-42d4-abfc-7c65b9c54970-etc-swift\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375542 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-ring-data-devices\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375618 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-swiftconf\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375751 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c943030b-61ba-498a-8763-9d8f51b90792-etc-swift\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375790 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-dispersionconf\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375813 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-swiftconf\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375888 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-combined-ca-bundle\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375920 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-combined-ca-bundle\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " 
pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.375972 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snqh6\" (UniqueName: \"kubernetes.io/projected/2ab64d41-3d73-42d4-abfc-7c65b9c54970-kube-api-access-snqh6\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.476734 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-ring-data-devices\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.476777 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-scripts\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.476824 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-scripts\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.476844 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-dispersionconf\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.476861 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2ab64d41-3d73-42d4-abfc-7c65b9c54970-etc-swift\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.476920 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-ring-data-devices\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.476936 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-swiftconf\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.477004 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c943030b-61ba-498a-8763-9d8f51b90792-etc-swift\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.477021 
4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-dispersionconf\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.477037 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-swiftconf\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.477063 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-combined-ca-bundle\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.477085 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-combined-ca-bundle\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.477107 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snqh6\" (UniqueName: \"kubernetes.io/projected/2ab64d41-3d73-42d4-abfc-7c65b9c54970-kube-api-access-snqh6\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.477147 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbkbd\" (UniqueName: \"kubernetes.io/projected/c943030b-61ba-498a-8763-9d8f51b90792-kube-api-access-qbkbd\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.478442 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-ring-data-devices\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.478782 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2ab64d41-3d73-42d4-abfc-7c65b9c54970-etc-swift\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.479810 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-scripts\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.480423 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-scripts\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.480593 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c943030b-61ba-498a-8763-9d8f51b90792-etc-swift\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.481593 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-ring-data-devices\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.482842 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-swiftconf\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.483668 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-dispersionconf\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.484063 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-dispersionconf\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.484639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-combined-ca-bundle\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.488772 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-swiftconf\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.489906 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-combined-ca-bundle\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.495628 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbkbd\" (UniqueName: \"kubernetes.io/projected/c943030b-61ba-498a-8763-9d8f51b90792-kube-api-access-qbkbd\") pod \"swift-ring-rebalance-4wf6f\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " 
pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.495938 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snqh6\" (UniqueName: \"kubernetes.io/projected/2ab64d41-3d73-42d4-abfc-7c65b9c54970-kube-api-access-snqh6\") pod \"swift-ring-rebalance-p9k8d\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.640941 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.641081 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.653776 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.663291 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.679438 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c943030b-61ba-498a-8763-9d8f51b90792-etc-swift\") pod \"c943030b-61ba-498a-8763-9d8f51b90792\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.679482 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbkbd\" (UniqueName: \"kubernetes.io/projected/c943030b-61ba-498a-8763-9d8f51b90792-kube-api-access-qbkbd\") pod \"c943030b-61ba-498a-8763-9d8f51b90792\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.679506 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-combined-ca-bundle\") pod \"c943030b-61ba-498a-8763-9d8f51b90792\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.679529 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-ring-data-devices\") pod \"c943030b-61ba-498a-8763-9d8f51b90792\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.679581 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-swiftconf\") pod \"c943030b-61ba-498a-8763-9d8f51b90792\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.679627 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-dispersionconf\") pod \"c943030b-61ba-498a-8763-9d8f51b90792\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.679680 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-scripts\") pod 
\"c943030b-61ba-498a-8763-9d8f51b90792\" (UID: \"c943030b-61ba-498a-8763-9d8f51b90792\") " Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.680672 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c943030b-61ba-498a-8763-9d8f51b90792-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c943030b-61ba-498a-8763-9d8f51b90792" (UID: "c943030b-61ba-498a-8763-9d8f51b90792"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.681103 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-scripts" (OuterVolumeSpecName: "scripts") pod "c943030b-61ba-498a-8763-9d8f51b90792" (UID: "c943030b-61ba-498a-8763-9d8f51b90792"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.682057 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c943030b-61ba-498a-8763-9d8f51b90792" (UID: "c943030b-61ba-498a-8763-9d8f51b90792"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.686306 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c943030b-61ba-498a-8763-9d8f51b90792" (UID: "c943030b-61ba-498a-8763-9d8f51b90792"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.686734 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c943030b-61ba-498a-8763-9d8f51b90792" (UID: "c943030b-61ba-498a-8763-9d8f51b90792"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.687035 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c943030b-61ba-498a-8763-9d8f51b90792" (UID: "c943030b-61ba-498a-8763-9d8f51b90792"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.690125 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c943030b-61ba-498a-8763-9d8f51b90792-kube-api-access-qbkbd" (OuterVolumeSpecName: "kube-api-access-qbkbd") pod "c943030b-61ba-498a-8763-9d8f51b90792" (UID: "c943030b-61ba-498a-8763-9d8f51b90792"). InnerVolumeSpecName "kube-api-access-qbkbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.781603 4746 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c943030b-61ba-498a-8763-9d8f51b90792-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.781638 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbkbd\" (UniqueName: \"kubernetes.io/projected/c943030b-61ba-498a-8763-9d8f51b90792-kube-api-access-qbkbd\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.781649 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.781658 4746 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.781666 4746 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.781710 4746 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c943030b-61ba-498a-8763-9d8f51b90792-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 16:54:59 crc kubenswrapper[4746]: I0129 16:54:59.781721 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c943030b-61ba-498a-8763-9d8f51b90792-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.087538 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-p9k8d"] Jan 29 16:55:00 crc kubenswrapper[4746]: W0129 16:55:00.089541 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ab64d41_3d73_42d4_abfc_7c65b9c54970.slice/crio-c05df8cda690f07c701f9b8fc08d1e029bbab99d29657a29c630803111876a40 WatchSource:0}: Error finding container c05df8cda690f07c701f9b8fc08d1e029bbab99d29657a29c630803111876a40: Status 404 returned error can't find the container with id c05df8cda690f07c701f9b8fc08d1e029bbab99d29657a29c630803111876a40 Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.650664 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p9k8d" event={"ID":"2ab64d41-3d73-42d4-abfc-7c65b9c54970","Type":"ContainerStarted","Data":"c05df8cda690f07c701f9b8fc08d1e029bbab99d29657a29c630803111876a40"} Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.650721 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-4wf6f" Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.687590 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-4wf6f"] Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.693408 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-4wf6f"] Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.865631 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-ddwh5"] Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.868372 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.871313 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.876999 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ddwh5"] Jan 29 16:55:00 crc kubenswrapper[4746]: I0129 16:55:00.928908 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.006454 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656137fe-0a34-408e-bb53-7817651630fa-operator-scripts\") pod \"root-account-create-update-ddwh5\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.006519 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blgmp\" (UniqueName: \"kubernetes.io/projected/656137fe-0a34-408e-bb53-7817651630fa-kube-api-access-blgmp\") pod \"root-account-create-update-ddwh5\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.108196 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656137fe-0a34-408e-bb53-7817651630fa-operator-scripts\") pod \"root-account-create-update-ddwh5\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.108265 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blgmp\" (UniqueName: \"kubernetes.io/projected/656137fe-0a34-408e-bb53-7817651630fa-kube-api-access-blgmp\") pod \"root-account-create-update-ddwh5\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.110165 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656137fe-0a34-408e-bb53-7817651630fa-operator-scripts\") pod \"root-account-create-update-ddwh5\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.129109 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blgmp\" (UniqueName: 
\"kubernetes.io/projected/656137fe-0a34-408e-bb53-7817651630fa-kube-api-access-blgmp\") pod \"root-account-create-update-ddwh5\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.202063 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.695411 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-ddwh5"] Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.935618 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lh2nd"] Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.937030 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:01 crc kubenswrapper[4746]: I0129 16:55:01.943538 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lh2nd"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.026009 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d95d-account-create-update-7v4ph"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.027202 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.028151 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhmfk\" (UniqueName: \"kubernetes.io/projected/d4af9fe5-b4be-4952-97ec-60c8d00703e9-kube-api-access-vhmfk\") pod \"keystone-db-create-lh2nd\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.028290 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4af9fe5-b4be-4952-97ec-60c8d00703e9-operator-scripts\") pod \"keystone-db-create-lh2nd\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.030718 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.033237 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d95d-account-create-update-7v4ph"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.136769 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhmfk\" (UniqueName: \"kubernetes.io/projected/d4af9fe5-b4be-4952-97ec-60c8d00703e9-kube-api-access-vhmfk\") pod \"keystone-db-create-lh2nd\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.136878 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4af9fe5-b4be-4952-97ec-60c8d00703e9-operator-scripts\") pod \"keystone-db-create-lh2nd\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.136924 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jp2sg\" (UniqueName: \"kubernetes.io/projected/ac094691-abb2-4295-bbe9-13b698b6b315-kube-api-access-jp2sg\") pod \"keystone-d95d-account-create-update-7v4ph\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.136957 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac094691-abb2-4295-bbe9-13b698b6b315-operator-scripts\") pod \"keystone-d95d-account-create-update-7v4ph\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.137914 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4af9fe5-b4be-4952-97ec-60c8d00703e9-operator-scripts\") pod \"keystone-db-create-lh2nd\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.165805 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhmfk\" (UniqueName: \"kubernetes.io/projected/d4af9fe5-b4be-4952-97ec-60c8d00703e9-kube-api-access-vhmfk\") pod \"keystone-db-create-lh2nd\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.212813 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hpls4"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.213872 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.240096 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp2sg\" (UniqueName: \"kubernetes.io/projected/ac094691-abb2-4295-bbe9-13b698b6b315-kube-api-access-jp2sg\") pod \"keystone-d95d-account-create-update-7v4ph\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.240159 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac094691-abb2-4295-bbe9-13b698b6b315-operator-scripts\") pod \"keystone-d95d-account-create-update-7v4ph\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.241134 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac094691-abb2-4295-bbe9-13b698b6b315-operator-scripts\") pod \"keystone-d95d-account-create-update-7v4ph\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.259716 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.259999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp2sg\" (UniqueName: \"kubernetes.io/projected/ac094691-abb2-4295-bbe9-13b698b6b315-kube-api-access-jp2sg\") pod \"keystone-d95d-account-create-update-7v4ph\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.266261 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hpls4"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.287789 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0884-account-create-update-m66xk"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.289020 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.290720 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.295555 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0884-account-create-update-m66xk"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.341745 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92e2416-cc6d-4276-a96b-446a90bb18c0-operator-scripts\") pod \"placement-db-create-hpls4\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.341842 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t87zj\" (UniqueName: \"kubernetes.io/projected/f92e2416-cc6d-4276-a96b-446a90bb18c0-kube-api-access-t87zj\") pod \"placement-db-create-hpls4\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.345260 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.443531 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9db3bc-78e5-462d-80cb-8022f80959ab-operator-scripts\") pod \"placement-0884-account-create-update-m66xk\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.443597 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92e2416-cc6d-4276-a96b-446a90bb18c0-operator-scripts\") pod \"placement-db-create-hpls4\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.443950 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t87zj\" (UniqueName: \"kubernetes.io/projected/f92e2416-cc6d-4276-a96b-446a90bb18c0-kube-api-access-t87zj\") pod \"placement-db-create-hpls4\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.444414 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92e2416-cc6d-4276-a96b-446a90bb18c0-operator-scripts\") pod \"placement-db-create-hpls4\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.445966 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkbh\" (UniqueName: \"kubernetes.io/projected/cb9db3bc-78e5-462d-80cb-8022f80959ab-kube-api-access-rpkbh\") pod \"placement-0884-account-create-update-m66xk\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.472244 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t87zj\" (UniqueName: \"kubernetes.io/projected/f92e2416-cc6d-4276-a96b-446a90bb18c0-kube-api-access-t87zj\") pod \"placement-db-create-hpls4\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.485401 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c943030b-61ba-498a-8763-9d8f51b90792" path="/var/lib/kubelet/pods/c943030b-61ba-498a-8763-9d8f51b90792/volumes" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.486062 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vz8vc"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.487492 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vz8vc"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.487610 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.547612 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpkbh\" (UniqueName: \"kubernetes.io/projected/cb9db3bc-78e5-462d-80cb-8022f80959ab-kube-api-access-rpkbh\") pod \"placement-0884-account-create-update-m66xk\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.547730 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9db3bc-78e5-462d-80cb-8022f80959ab-operator-scripts\") pod \"placement-0884-account-create-update-m66xk\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.549005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9db3bc-78e5-462d-80cb-8022f80959ab-operator-scripts\") pod \"placement-0884-account-create-update-m66xk\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.551699 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hpls4" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.577276 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpkbh\" (UniqueName: \"kubernetes.io/projected/cb9db3bc-78e5-462d-80cb-8022f80959ab-kube-api-access-rpkbh\") pod \"placement-0884-account-create-update-m66xk\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.584411 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4017-account-create-update-x8q6t"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.585763 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.591170 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.594440 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4017-account-create-update-x8q6t"] Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.610543 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.649637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x746\" (UniqueName: \"kubernetes.io/projected/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-kube-api-access-2x746\") pod \"glance-db-create-vz8vc\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.649729 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-operator-scripts\") pod \"glance-db-create-vz8vc\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.751155 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-operator-scripts\") pod \"glance-db-create-vz8vc\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.751383 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvsz\" (UniqueName: \"kubernetes.io/projected/5d60a101-b5c2-4280-8d06-c7556eaf1535-kube-api-access-9nvsz\") pod \"glance-4017-account-create-update-x8q6t\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.751505 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d60a101-b5c2-4280-8d06-c7556eaf1535-operator-scripts\") pod \"glance-4017-account-create-update-x8q6t\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.751537 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x746\" (UniqueName: \"kubernetes.io/projected/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-kube-api-access-2x746\") pod \"glance-db-create-vz8vc\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.752043 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-operator-scripts\") pod \"glance-db-create-vz8vc\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.767733 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x746\" (UniqueName: \"kubernetes.io/projected/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-kube-api-access-2x746\") pod \"glance-db-create-vz8vc\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.828338 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.853204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d60a101-b5c2-4280-8d06-c7556eaf1535-operator-scripts\") pod \"glance-4017-account-create-update-x8q6t\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.853372 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvsz\" (UniqueName: \"kubernetes.io/projected/5d60a101-b5c2-4280-8d06-c7556eaf1535-kube-api-access-9nvsz\") pod \"glance-4017-account-create-update-x8q6t\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.854023 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d60a101-b5c2-4280-8d06-c7556eaf1535-operator-scripts\") pod \"glance-4017-account-create-update-x8q6t\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.869867 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvsz\" (UniqueName: \"kubernetes.io/projected/5d60a101-b5c2-4280-8d06-c7556eaf1535-kube-api-access-9nvsz\") pod \"glance-4017-account-create-update-x8q6t\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:02 crc kubenswrapper[4746]: I0129 16:55:02.921045 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:03 crc kubenswrapper[4746]: I0129 16:55:03.361030 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:55:03 crc kubenswrapper[4746]: E0129 16:55:03.361230 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:55:03 crc kubenswrapper[4746]: E0129 16:55:03.361530 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:55:03 crc kubenswrapper[4746]: E0129 16:55:03.361752 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift podName:4434dba0-90da-4ac0-8cd4-5c2babfdb2eb nodeName:}" failed. No retries permitted until 2026-01-29 16:55:11.361716485 +0000 UTC m=+1233.762301129 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift") pod "swift-storage-0" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb") : configmap "swift-ring-files" not found Jan 29 16:55:04 crc kubenswrapper[4746]: I0129 16:55:04.655594 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" Jan 29 16:55:04 crc kubenswrapper[4746]: I0129 16:55:04.689221 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ddwh5" event={"ID":"656137fe-0a34-408e-bb53-7817651630fa","Type":"ContainerStarted","Data":"e9cbcfc12427a81b6e4b347a716f72f523dcadd22cde17f7da2bf401f66972c7"} Jan 29 16:55:04 crc kubenswrapper[4746]: I0129 16:55:04.689271 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ddwh5" event={"ID":"656137fe-0a34-408e-bb53-7817651630fa","Type":"ContainerStarted","Data":"610fb8e05c4670b6d24aac1b9be6c95f1887849497e056e93dcb1462a435eda2"} Jan 29 16:55:04 crc kubenswrapper[4746]: I0129 16:55:04.724379 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-dxl4s"] Jan 29 16:55:04 crc kubenswrapper[4746]: I0129 16:55:04.731417 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" podUID="990c722e-2e75-4a70-9825-0a17324ecac6" containerName="dnsmasq-dns" containerID="cri-o://81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf" gracePeriod=10 Jan 29 16:55:04 crc kubenswrapper[4746]: I0129 16:55:04.803163 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d95d-account-create-update-7v4ph"] Jan 29 16:55:04 crc kubenswrapper[4746]: W0129 16:55:04.812343 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac094691_abb2_4295_bbe9_13b698b6b315.slice/crio-76a3b19ff66c5de953bb2625bca74678dc441bb3a1b3cbcc0793d4a2ba95a8c1 WatchSource:0}: Error finding container 76a3b19ff66c5de953bb2625bca74678dc441bb3a1b3cbcc0793d4a2ba95a8c1: Status 404 returned error can't find the container with id 76a3b19ff66c5de953bb2625bca74678dc441bb3a1b3cbcc0793d4a2ba95a8c1 Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.109473 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lh2nd"] Jan 29 16:55:05 crc kubenswrapper[4746]: W0129 16:55:05.117469 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4af9fe5_b4be_4952_97ec_60c8d00703e9.slice/crio-599cf49d0112de8137f63c1eea24c6ad15e726ef0cdd1f152509515beafd95b3 WatchSource:0}: Error finding container 599cf49d0112de8137f63c1eea24c6ad15e726ef0cdd1f152509515beafd95b3: Status 404 returned error can't find the container with id 599cf49d0112de8137f63c1eea24c6ad15e726ef0cdd1f152509515beafd95b3 Jan 29 16:55:05 crc kubenswrapper[4746]: W0129 16:55:05.120495 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d60a101_b5c2_4280_8d06_c7556eaf1535.slice/crio-583e3edb3f37afd34c2a203c7b5124b65b5698862612e102d5258b1e72223870 WatchSource:0}: Error finding container 583e3edb3f37afd34c2a203c7b5124b65b5698862612e102d5258b1e72223870: Status 404 returned error can't find the container with id 583e3edb3f37afd34c2a203c7b5124b65b5698862612e102d5258b1e72223870 
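The failure sequence above is the kubelet's volume manager unable to set up the projected "etc-swift" volume for swift-storage-0: the volume projects the openstack/swift-ring-files ConfigMap, which does not exist yet because the swift-ring-rebalance job that publishes it is still running at this point in the log. The "No retries permitted until ... (durationBeforeRetry 8s)" line is the volume manager's per-volume exponential backoff; the delay roughly doubles after each consecutive failure, so an 8s wait is consistent with a fifth attempt under an assumed 500ms initial delay (500ms, 1s, 2s, 4s, 8s). The W0129 "Failed to process watch event ... Status 404" warnings appear to be a benign startup race between the kubelet's cgroup watcher and CRI-O, where a container's cgroup event arrives before the container is queryable by ID. As a minimal sketch only, assuming the pod spec matches what the log implies (the actual swift-storage-0 manifest is not part of this log and may project additional sources), the volume being retried would look like this using k8s.io/api types:

    // Sketch of the projected volume that MountVolume.SetUp keeps retrying,
    // inferred from the log above; not the actual swift-storage-0 manifest.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	etcSwift := corev1.Volume{
    		Name: "etc-swift",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
    					},
    				}},
    			},
    		},
    	}
    	// Until the referenced ConfigMap exists, kubelet's SetUp for this
    	// volume fails with: configmap "swift-ring-files" not found.
    	fmt.Println(etcSwift.Name, etcSwift.Projected.Sources[0].ConfigMap.Name)
    }

Once the ConfigMap is created, the next backoff-scheduled SetUp attempt can succeed without any pod restart: the reconciler keeps retrying unmounted volumes while the pod waits in ContainerCreating.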
Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.120902 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4017-account-create-update-x8q6t"] Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.145367 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vz8vc"] Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.153463 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hpls4"] Jan 29 16:55:05 crc kubenswrapper[4746]: W0129 16:55:05.160607 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92e2416_cc6d_4276_a96b_446a90bb18c0.slice/crio-6f58859585676c0965d5722f070c1e7026b6a40acdbdd5e44af33d4284b312a8 WatchSource:0}: Error finding container 6f58859585676c0965d5722f070c1e7026b6a40acdbdd5e44af33d4284b312a8: Status 404 returned error can't find the container with id 6f58859585676c0965d5722f070c1e7026b6a40acdbdd5e44af33d4284b312a8 Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.194955 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0884-account-create-update-m66xk"] Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.385425 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.516494 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-ovsdbserver-sb\") pod \"990c722e-2e75-4a70-9825-0a17324ecac6\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.516531 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-dns-svc\") pod \"990c722e-2e75-4a70-9825-0a17324ecac6\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.516642 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzdjt\" (UniqueName: \"kubernetes.io/projected/990c722e-2e75-4a70-9825-0a17324ecac6-kube-api-access-pzdjt\") pod \"990c722e-2e75-4a70-9825-0a17324ecac6\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.516741 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-config\") pod \"990c722e-2e75-4a70-9825-0a17324ecac6\" (UID: \"990c722e-2e75-4a70-9825-0a17324ecac6\") " Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.531299 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/990c722e-2e75-4a70-9825-0a17324ecac6-kube-api-access-pzdjt" (OuterVolumeSpecName: "kube-api-access-pzdjt") pod "990c722e-2e75-4a70-9825-0a17324ecac6" (UID: "990c722e-2e75-4a70-9825-0a17324ecac6"). InnerVolumeSpecName "kube-api-access-pzdjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.619636 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzdjt\" (UniqueName: \"kubernetes.io/projected/990c722e-2e75-4a70-9825-0a17324ecac6-kube-api-access-pzdjt\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.656942 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-config" (OuterVolumeSpecName: "config") pod "990c722e-2e75-4a70-9825-0a17324ecac6" (UID: "990c722e-2e75-4a70-9825-0a17324ecac6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.684258 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "990c722e-2e75-4a70-9825-0a17324ecac6" (UID: "990c722e-2e75-4a70-9825-0a17324ecac6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.700047 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "990c722e-2e75-4a70-9825-0a17324ecac6" (UID: "990c722e-2e75-4a70-9825-0a17324ecac6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.707427 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p9k8d" event={"ID":"2ab64d41-3d73-42d4-abfc-7c65b9c54970","Type":"ContainerStarted","Data":"f20c1208bb170c0dc12ec84c9358d47475e98d721184240642696df4d5199cc4"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.713795 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vz8vc" event={"ID":"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1","Type":"ContainerStarted","Data":"79e1251fb71ce11a43cf32e7a28779e697303a7079a8d785c6ab9099c472b0a2"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.713872 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vz8vc" event={"ID":"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1","Type":"ContainerStarted","Data":"cbe630a9ea7f0a56c549f8fb7c0c794654138c979ee336552d83e594329c02f2"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.718960 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4017-account-create-update-x8q6t" event={"ID":"5d60a101-b5c2-4280-8d06-c7556eaf1535","Type":"ContainerStarted","Data":"583e3edb3f37afd34c2a203c7b5124b65b5698862612e102d5258b1e72223870"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.721974 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.722011 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.722025 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/990c722e-2e75-4a70-9825-0a17324ecac6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.724219 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hpls4" event={"ID":"f92e2416-cc6d-4276-a96b-446a90bb18c0","Type":"ContainerStarted","Data":"6f58859585676c0965d5722f070c1e7026b6a40acdbdd5e44af33d4284b312a8"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.737446 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-p9k8d" podStartSLOduration=2.5055543030000003 podStartE2EDuration="6.737415457s" podCreationTimestamp="2026-01-29 16:54:59 +0000 UTC" firstStartedPulling="2026-01-29 16:55:00.092098238 +0000 UTC m=+1222.492682882" lastFinishedPulling="2026-01-29 16:55:04.323959392 +0000 UTC m=+1226.724544036" observedRunningTime="2026-01-29 16:55:05.731421787 +0000 UTC m=+1228.132006441" watchObservedRunningTime="2026-01-29 16:55:05.737415457 +0000 UTC m=+1228.138000101" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.738091 4746 generic.go:334] "Generic (PLEG): container finished" podID="990c722e-2e75-4a70-9825-0a17324ecac6" containerID="81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf" exitCode=0 Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.738256 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" event={"ID":"990c722e-2e75-4a70-9825-0a17324ecac6","Type":"ContainerDied","Data":"81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.738339 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" event={"ID":"990c722e-2e75-4a70-9825-0a17324ecac6","Type":"ContainerDied","Data":"a94da6e122c122a24de608d65a50ce282cf586454f14506ae956385bcc8f6c3e"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.738425 4746 scope.go:117] "RemoveContainer" containerID="81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.738640 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-dxl4s" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.748988 4746 generic.go:334] "Generic (PLEG): container finished" podID="656137fe-0a34-408e-bb53-7817651630fa" containerID="e9cbcfc12427a81b6e4b347a716f72f523dcadd22cde17f7da2bf401f66972c7" exitCode=0 Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.749081 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ddwh5" event={"ID":"656137fe-0a34-408e-bb53-7817651630fa","Type":"ContainerDied","Data":"e9cbcfc12427a81b6e4b347a716f72f523dcadd22cde17f7da2bf401f66972c7"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.765036 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lh2nd" event={"ID":"d4af9fe5-b4be-4952-97ec-60c8d00703e9","Type":"ContainerStarted","Data":"0c10cc49ecb618eb08c28b96af93caf437f2eab603b005ba55fa890df2e8cb3d"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.765577 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lh2nd" event={"ID":"d4af9fe5-b4be-4952-97ec-60c8d00703e9","Type":"ContainerStarted","Data":"599cf49d0112de8137f63c1eea24c6ad15e726ef0cdd1f152509515beafd95b3"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.767876 4746 generic.go:334] "Generic (PLEG): container finished" podID="ac094691-abb2-4295-bbe9-13b698b6b315" containerID="35670f9a01e378fa8f461a089914897b236fe45d29b761ce22819d6a16d6a248" exitCode=0 Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.768082 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d95d-account-create-update-7v4ph" event={"ID":"ac094691-abb2-4295-bbe9-13b698b6b315","Type":"ContainerDied","Data":"35670f9a01e378fa8f461a089914897b236fe45d29b761ce22819d6a16d6a248"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.768109 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d95d-account-create-update-7v4ph" event={"ID":"ac094691-abb2-4295-bbe9-13b698b6b315","Type":"ContainerStarted","Data":"76a3b19ff66c5de953bb2625bca74678dc441bb3a1b3cbcc0793d4a2ba95a8c1"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.767939 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vz8vc" podStartSLOduration=3.76789937 podStartE2EDuration="3.76789937s" podCreationTimestamp="2026-01-29 16:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:05.752601591 +0000 UTC m=+1228.153186245" watchObservedRunningTime="2026-01-29 16:55:05.76789937 +0000 UTC m=+1228.168484014" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.774160 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0884-account-create-update-m66xk" event={"ID":"cb9db3bc-78e5-462d-80cb-8022f80959ab","Type":"ContainerStarted","Data":"e10bae969f747867594da39e4532cc0ea0b53b313760588077f9268b085b2e38"} Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.777742 4746 scope.go:117] "RemoveContainer" containerID="4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.785150 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-hpls4" podStartSLOduration=3.785125919 podStartE2EDuration="3.785125919s" podCreationTimestamp="2026-01-29 16:55:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:05.77315208 +0000 UTC m=+1228.173736724" watchObservedRunningTime="2026-01-29 16:55:05.785125919 +0000 UTC m=+1228.185710563" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.804456 4746 scope.go:117] "RemoveContainer" containerID="81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf" Jan 29 16:55:05 crc kubenswrapper[4746]: E0129 16:55:05.806594 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf\": container with ID starting with 81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf not found: ID does not exist" containerID="81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.820302 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf"} err="failed to get container status \"81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf\": rpc error: code = NotFound desc = could not find container \"81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf\": container with ID starting with 81e856f69e70b8e32f088a5e1930e757e3ba4dfe48f4071bf1d7e4f0cca802bf not found: ID does not exist" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.820371 4746 scope.go:117] "RemoveContainer" containerID="4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf" Jan 29 16:55:05 crc kubenswrapper[4746]: E0129 16:55:05.821075 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf\": container with ID starting with 4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf not found: ID does not exist" containerID="4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.821119 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf"} err="failed to get container status \"4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf\": rpc error: code = NotFound desc = could not find container \"4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf\": container with ID starting with 4cec8982312dae3daf52f27ea3090a22b261590e77840fcbfcf328660b7213bf not found: ID does not exist" Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.850872 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-dxl4s"] Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.857911 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-dxl4s"] Jan 29 16:55:05 crc kubenswrapper[4746]: I0129 16:55:05.858379 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-lh2nd" podStartSLOduration=4.858360773 podStartE2EDuration="4.858360773s" podCreationTimestamp="2026-01-29 16:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:05.850845852 +0000 UTC m=+1228.251430496" 
watchObservedRunningTime="2026-01-29 16:55:05.858360773 +0000 UTC m=+1228.258945417" Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.459573 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990c722e-2e75-4a70-9825-0a17324ecac6" path="/var/lib/kubelet/pods/990c722e-2e75-4a70-9825-0a17324ecac6/volumes" Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.715126 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.783597 4746 generic.go:334] "Generic (PLEG): container finished" podID="f92e2416-cc6d-4276-a96b-446a90bb18c0" containerID="27f8653b06a0ddb1325cd3a04654b678b389b70a18422b7030b7d50e299dd4c3" exitCode=0 Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.784171 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hpls4" event={"ID":"f92e2416-cc6d-4276-a96b-446a90bb18c0","Type":"ContainerDied","Data":"27f8653b06a0ddb1325cd3a04654b678b389b70a18422b7030b7d50e299dd4c3"} Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.787964 4746 generic.go:334] "Generic (PLEG): container finished" podID="cb9db3bc-78e5-462d-80cb-8022f80959ab" containerID="2d5aee9a083acfc810858a7c87db20c7c5b3dafb9632e60480711cbea239dba1" exitCode=0 Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.788029 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0884-account-create-update-m66xk" event={"ID":"cb9db3bc-78e5-462d-80cb-8022f80959ab","Type":"ContainerDied","Data":"2d5aee9a083acfc810858a7c87db20c7c5b3dafb9632e60480711cbea239dba1"} Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.789683 4746 generic.go:334] "Generic (PLEG): container finished" podID="3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1" containerID="79e1251fb71ce11a43cf32e7a28779e697303a7079a8d785c6ab9099c472b0a2" exitCode=0 Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.789742 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vz8vc" event={"ID":"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1","Type":"ContainerDied","Data":"79e1251fb71ce11a43cf32e7a28779e697303a7079a8d785c6ab9099c472b0a2"} Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.791174 4746 generic.go:334] "Generic (PLEG): container finished" podID="d4af9fe5-b4be-4952-97ec-60c8d00703e9" containerID="0c10cc49ecb618eb08c28b96af93caf437f2eab603b005ba55fa890df2e8cb3d" exitCode=0 Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.791285 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lh2nd" event={"ID":"d4af9fe5-b4be-4952-97ec-60c8d00703e9","Type":"ContainerDied","Data":"0c10cc49ecb618eb08c28b96af93caf437f2eab603b005ba55fa890df2e8cb3d"} Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.792749 4746 generic.go:334] "Generic (PLEG): container finished" podID="5d60a101-b5c2-4280-8d06-c7556eaf1535" containerID="5b1e351f12ff9822899af90b93ad119157ebdcc79e119352d35d3f52ab18cf79" exitCode=0 Jan 29 16:55:06 crc kubenswrapper[4746]: I0129 16:55:06.792953 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4017-account-create-update-x8q6t" event={"ID":"5d60a101-b5c2-4280-8d06-c7556eaf1535","Type":"ContainerDied","Data":"5b1e351f12ff9822899af90b93ad119157ebdcc79e119352d35d3f52ab18cf79"} Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.251759 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.258976 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.351251 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac094691-abb2-4295-bbe9-13b698b6b315-operator-scripts\") pod \"ac094691-abb2-4295-bbe9-13b698b6b315\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.351430 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp2sg\" (UniqueName: \"kubernetes.io/projected/ac094691-abb2-4295-bbe9-13b698b6b315-kube-api-access-jp2sg\") pod \"ac094691-abb2-4295-bbe9-13b698b6b315\" (UID: \"ac094691-abb2-4295-bbe9-13b698b6b315\") " Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.352127 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac094691-abb2-4295-bbe9-13b698b6b315-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac094691-abb2-4295-bbe9-13b698b6b315" (UID: "ac094691-abb2-4295-bbe9-13b698b6b315"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.356889 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac094691-abb2-4295-bbe9-13b698b6b315-kube-api-access-jp2sg" (OuterVolumeSpecName: "kube-api-access-jp2sg") pod "ac094691-abb2-4295-bbe9-13b698b6b315" (UID: "ac094691-abb2-4295-bbe9-13b698b6b315"). InnerVolumeSpecName "kube-api-access-jp2sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.453443 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blgmp\" (UniqueName: \"kubernetes.io/projected/656137fe-0a34-408e-bb53-7817651630fa-kube-api-access-blgmp\") pod \"656137fe-0a34-408e-bb53-7817651630fa\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.453735 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656137fe-0a34-408e-bb53-7817651630fa-operator-scripts\") pod \"656137fe-0a34-408e-bb53-7817651630fa\" (UID: \"656137fe-0a34-408e-bb53-7817651630fa\") " Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.454073 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac094691-abb2-4295-bbe9-13b698b6b315-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.454090 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp2sg\" (UniqueName: \"kubernetes.io/projected/ac094691-abb2-4295-bbe9-13b698b6b315-kube-api-access-jp2sg\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.454294 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656137fe-0a34-408e-bb53-7817651630fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "656137fe-0a34-408e-bb53-7817651630fa" (UID: "656137fe-0a34-408e-bb53-7817651630fa"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.457310 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656137fe-0a34-408e-bb53-7817651630fa-kube-api-access-blgmp" (OuterVolumeSpecName: "kube-api-access-blgmp") pod "656137fe-0a34-408e-bb53-7817651630fa" (UID: "656137fe-0a34-408e-bb53-7817651630fa"). InnerVolumeSpecName "kube-api-access-blgmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.555832 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blgmp\" (UniqueName: \"kubernetes.io/projected/656137fe-0a34-408e-bb53-7817651630fa-kube-api-access-blgmp\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.555885 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656137fe-0a34-408e-bb53-7817651630fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.805031 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-ddwh5" event={"ID":"656137fe-0a34-408e-bb53-7817651630fa","Type":"ContainerDied","Data":"610fb8e05c4670b6d24aac1b9be6c95f1887849497e056e93dcb1462a435eda2"} Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.805076 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="610fb8e05c4670b6d24aac1b9be6c95f1887849497e056e93dcb1462a435eda2" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.805085 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-ddwh5" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.806850 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d95d-account-create-update-7v4ph" event={"ID":"ac094691-abb2-4295-bbe9-13b698b6b315","Type":"ContainerDied","Data":"76a3b19ff66c5de953bb2625bca74678dc441bb3a1b3cbcc0793d4a2ba95a8c1"} Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.806881 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a3b19ff66c5de953bb2625bca74678dc441bb3a1b3cbcc0793d4a2ba95a8c1" Jan 29 16:55:07 crc kubenswrapper[4746]: I0129 16:55:07.806969 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d95d-account-create-update-7v4ph" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.240547 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.268910 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x746\" (UniqueName: \"kubernetes.io/projected/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-kube-api-access-2x746\") pod \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.268959 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-operator-scripts\") pod \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\" (UID: \"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.270059 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1" (UID: "3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.273388 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-kube-api-access-2x746" (OuterVolumeSpecName: "kube-api-access-2x746") pod "3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1" (UID: "3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1"). InnerVolumeSpecName "kube-api-access-2x746". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.371900 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x746\" (UniqueName: \"kubernetes.io/projected/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-kube-api-access-2x746\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.371927 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.493109 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hpls4" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.496610 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.511779 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.537128 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.574964 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpkbh\" (UniqueName: \"kubernetes.io/projected/cb9db3bc-78e5-462d-80cb-8022f80959ab-kube-api-access-rpkbh\") pod \"cb9db3bc-78e5-462d-80cb-8022f80959ab\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.575415 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d60a101-b5c2-4280-8d06-c7556eaf1535-operator-scripts\") pod \"5d60a101-b5c2-4280-8d06-c7556eaf1535\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.575593 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t87zj\" (UniqueName: \"kubernetes.io/projected/f92e2416-cc6d-4276-a96b-446a90bb18c0-kube-api-access-t87zj\") pod \"f92e2416-cc6d-4276-a96b-446a90bb18c0\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.575759 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvsz\" (UniqueName: \"kubernetes.io/projected/5d60a101-b5c2-4280-8d06-c7556eaf1535-kube-api-access-9nvsz\") pod \"5d60a101-b5c2-4280-8d06-c7556eaf1535\" (UID: \"5d60a101-b5c2-4280-8d06-c7556eaf1535\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.575898 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d60a101-b5c2-4280-8d06-c7556eaf1535-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d60a101-b5c2-4280-8d06-c7556eaf1535" (UID: "5d60a101-b5c2-4280-8d06-c7556eaf1535"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.576039 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9db3bc-78e5-462d-80cb-8022f80959ab-operator-scripts\") pod \"cb9db3bc-78e5-462d-80cb-8022f80959ab\" (UID: \"cb9db3bc-78e5-462d-80cb-8022f80959ab\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.576178 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhmfk\" (UniqueName: \"kubernetes.io/projected/d4af9fe5-b4be-4952-97ec-60c8d00703e9-kube-api-access-vhmfk\") pod \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.576388 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92e2416-cc6d-4276-a96b-446a90bb18c0-operator-scripts\") pod \"f92e2416-cc6d-4276-a96b-446a90bb18c0\" (UID: \"f92e2416-cc6d-4276-a96b-446a90bb18c0\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.576660 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4af9fe5-b4be-4952-97ec-60c8d00703e9-operator-scripts\") pod \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\" (UID: \"d4af9fe5-b4be-4952-97ec-60c8d00703e9\") " Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.576802 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb9db3bc-78e5-462d-80cb-8022f80959ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb9db3bc-78e5-462d-80cb-8022f80959ab" (UID: "cb9db3bc-78e5-462d-80cb-8022f80959ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.577140 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f92e2416-cc6d-4276-a96b-446a90bb18c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f92e2416-cc6d-4276-a96b-446a90bb18c0" (UID: "f92e2416-cc6d-4276-a96b-446a90bb18c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.577488 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4af9fe5-b4be-4952-97ec-60c8d00703e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4af9fe5-b4be-4952-97ec-60c8d00703e9" (UID: "d4af9fe5-b4be-4952-97ec-60c8d00703e9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.577930 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f92e2416-cc6d-4276-a96b-446a90bb18c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.578061 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4af9fe5-b4be-4952-97ec-60c8d00703e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.578318 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d60a101-b5c2-4280-8d06-c7556eaf1535-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.578588 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb9db3bc-78e5-462d-80cb-8022f80959ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.578718 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9db3bc-78e5-462d-80cb-8022f80959ab-kube-api-access-rpkbh" (OuterVolumeSpecName: "kube-api-access-rpkbh") pod "cb9db3bc-78e5-462d-80cb-8022f80959ab" (UID: "cb9db3bc-78e5-462d-80cb-8022f80959ab"). InnerVolumeSpecName "kube-api-access-rpkbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.579217 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92e2416-cc6d-4276-a96b-446a90bb18c0-kube-api-access-t87zj" (OuterVolumeSpecName: "kube-api-access-t87zj") pod "f92e2416-cc6d-4276-a96b-446a90bb18c0" (UID: "f92e2416-cc6d-4276-a96b-446a90bb18c0"). InnerVolumeSpecName "kube-api-access-t87zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.581183 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d60a101-b5c2-4280-8d06-c7556eaf1535-kube-api-access-9nvsz" (OuterVolumeSpecName: "kube-api-access-9nvsz") pod "5d60a101-b5c2-4280-8d06-c7556eaf1535" (UID: "5d60a101-b5c2-4280-8d06-c7556eaf1535"). InnerVolumeSpecName "kube-api-access-9nvsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.581313 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4af9fe5-b4be-4952-97ec-60c8d00703e9-kube-api-access-vhmfk" (OuterVolumeSpecName: "kube-api-access-vhmfk") pod "d4af9fe5-b4be-4952-97ec-60c8d00703e9" (UID: "d4af9fe5-b4be-4952-97ec-60c8d00703e9"). InnerVolumeSpecName "kube-api-access-vhmfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.680361 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t87zj\" (UniqueName: \"kubernetes.io/projected/f92e2416-cc6d-4276-a96b-446a90bb18c0-kube-api-access-t87zj\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.680394 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvsz\" (UniqueName: \"kubernetes.io/projected/5d60a101-b5c2-4280-8d06-c7556eaf1535-kube-api-access-9nvsz\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.680404 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhmfk\" (UniqueName: \"kubernetes.io/projected/d4af9fe5-b4be-4952-97ec-60c8d00703e9-kube-api-access-vhmfk\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.680414 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpkbh\" (UniqueName: \"kubernetes.io/projected/cb9db3bc-78e5-462d-80cb-8022f80959ab-kube-api-access-rpkbh\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.819460 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hpls4" event={"ID":"f92e2416-cc6d-4276-a96b-446a90bb18c0","Type":"ContainerDied","Data":"6f58859585676c0965d5722f070c1e7026b6a40acdbdd5e44af33d4284b312a8"} Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.819498 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hpls4" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.819507 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f58859585676c0965d5722f070c1e7026b6a40acdbdd5e44af33d4284b312a8" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.821151 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0884-account-create-update-m66xk" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.821150 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0884-account-create-update-m66xk" event={"ID":"cb9db3bc-78e5-462d-80cb-8022f80959ab","Type":"ContainerDied","Data":"e10bae969f747867594da39e4532cc0ea0b53b313760588077f9268b085b2e38"} Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.821267 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10bae969f747867594da39e4532cc0ea0b53b313760588077f9268b085b2e38" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.822938 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vz8vc" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.822730 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vz8vc" event={"ID":"3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1","Type":"ContainerDied","Data":"cbe630a9ea7f0a56c549f8fb7c0c794654138c979ee336552d83e594329c02f2"} Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.823048 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe630a9ea7f0a56c549f8fb7c0c794654138c979ee336552d83e594329c02f2" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.825183 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4017-account-create-update-x8q6t" event={"ID":"5d60a101-b5c2-4280-8d06-c7556eaf1535","Type":"ContainerDied","Data":"583e3edb3f37afd34c2a203c7b5124b65b5698862612e102d5258b1e72223870"} Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.825246 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="583e3edb3f37afd34c2a203c7b5124b65b5698862612e102d5258b1e72223870" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.825300 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4017-account-create-update-x8q6t" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.827924 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lh2nd" event={"ID":"d4af9fe5-b4be-4952-97ec-60c8d00703e9","Type":"ContainerDied","Data":"599cf49d0112de8137f63c1eea24c6ad15e726ef0cdd1f152509515beafd95b3"} Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.827961 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="599cf49d0112de8137f63c1eea24c6ad15e726ef0cdd1f152509515beafd95b3" Jan 29 16:55:08 crc kubenswrapper[4746]: I0129 16:55:08.828009 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lh2nd" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.332382 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-ddwh5"] Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.338797 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-ddwh5"] Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436154 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9zlds"] Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436663 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4af9fe5-b4be-4952-97ec-60c8d00703e9" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436686 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4af9fe5-b4be-4952-97ec-60c8d00703e9" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436721 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436731 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436750 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d60a101-b5c2-4280-8d06-c7556eaf1535" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436760 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d60a101-b5c2-4280-8d06-c7556eaf1535" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436778 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="990c722e-2e75-4a70-9825-0a17324ecac6" containerName="init" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436787 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="990c722e-2e75-4a70-9825-0a17324ecac6" containerName="init" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436800 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92e2416-cc6d-4276-a96b-446a90bb18c0" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436811 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92e2416-cc6d-4276-a96b-446a90bb18c0" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436828 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9db3bc-78e5-462d-80cb-8022f80959ab" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436841 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9db3bc-78e5-462d-80cb-8022f80959ab" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436865 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac094691-abb2-4295-bbe9-13b698b6b315" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436875 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac094691-abb2-4295-bbe9-13b698b6b315" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436888 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="990c722e-2e75-4a70-9825-0a17324ecac6" containerName="dnsmasq-dns" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436898 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="990c722e-2e75-4a70-9825-0a17324ecac6" containerName="dnsmasq-dns" Jan 29 16:55:09 crc kubenswrapper[4746]: E0129 16:55:09.436910 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656137fe-0a34-408e-bb53-7817651630fa" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.436918 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="656137fe-0a34-408e-bb53-7817651630fa" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437148 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4af9fe5-b4be-4952-97ec-60c8d00703e9" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437164 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="990c722e-2e75-4a70-9825-0a17324ecac6" containerName="dnsmasq-dns" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437175 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac094691-abb2-4295-bbe9-13b698b6b315" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437217 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92e2416-cc6d-4276-a96b-446a90bb18c0" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437231 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1" containerName="mariadb-database-create" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437249 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="656137fe-0a34-408e-bb53-7817651630fa" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437258 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9db3bc-78e5-462d-80cb-8022f80959ab" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437274 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d60a101-b5c2-4280-8d06-c7556eaf1535" containerName="mariadb-account-create-update" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.437971 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.439775 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.447215 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9zlds"] Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.494431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg7vk\" (UniqueName: \"kubernetes.io/projected/239c5843-2971-4976-a276-82689a1ee336-kube-api-access-lg7vk\") pod \"root-account-create-update-9zlds\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.494512 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c5843-2971-4976-a276-82689a1ee336-operator-scripts\") pod \"root-account-create-update-9zlds\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.595634 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg7vk\" (UniqueName: \"kubernetes.io/projected/239c5843-2971-4976-a276-82689a1ee336-kube-api-access-lg7vk\") pod \"root-account-create-update-9zlds\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.595718 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c5843-2971-4976-a276-82689a1ee336-operator-scripts\") pod \"root-account-create-update-9zlds\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.596936 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c5843-2971-4976-a276-82689a1ee336-operator-scripts\") pod \"root-account-create-update-9zlds\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.617921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg7vk\" (UniqueName: \"kubernetes.io/projected/239c5843-2971-4976-a276-82689a1ee336-kube-api-access-lg7vk\") pod \"root-account-create-update-9zlds\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:09 crc kubenswrapper[4746]: I0129 16:55:09.752750 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:10 crc kubenswrapper[4746]: I0129 16:55:10.215489 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9zlds"] Jan 29 16:55:10 crc kubenswrapper[4746]: W0129 16:55:10.221302 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod239c5843_2971_4976_a276_82689a1ee336.slice/crio-325761bb8233c91d52bb88423d09f09f242a7db367deb6ce5ed82b3dde887106 WatchSource:0}: Error finding container 325761bb8233c91d52bb88423d09f09f242a7db367deb6ce5ed82b3dde887106: Status 404 returned error can't find the container with id 325761bb8233c91d52bb88423d09f09f242a7db367deb6ce5ed82b3dde887106 Jan 29 16:55:10 crc kubenswrapper[4746]: I0129 16:55:10.454568 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656137fe-0a34-408e-bb53-7817651630fa" path="/var/lib/kubelet/pods/656137fe-0a34-408e-bb53-7817651630fa/volumes" Jan 29 16:55:10 crc kubenswrapper[4746]: I0129 16:55:10.862623 4746 generic.go:334] "Generic (PLEG): container finished" podID="239c5843-2971-4976-a276-82689a1ee336" containerID="3376338c9ce4227a9c44f1784e6769778b27bd95c1b827647fc32a1f0b5f511b" exitCode=0 Jan 29 16:55:10 crc kubenswrapper[4746]: I0129 16:55:10.862744 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zlds" event={"ID":"239c5843-2971-4976-a276-82689a1ee336","Type":"ContainerDied","Data":"3376338c9ce4227a9c44f1784e6769778b27bd95c1b827647fc32a1f0b5f511b"} Jan 29 16:55:10 crc kubenswrapper[4746]: I0129 16:55:10.862805 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zlds" event={"ID":"239c5843-2971-4976-a276-82689a1ee336","Type":"ContainerStarted","Data":"325761bb8233c91d52bb88423d09f09f242a7db367deb6ce5ed82b3dde887106"} Jan 29 16:55:11 crc kubenswrapper[4746]: I0129 16:55:11.428101 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:55:11 crc kubenswrapper[4746]: E0129 16:55:11.428549 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:55:11 crc kubenswrapper[4746]: E0129 16:55:11.428782 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:55:11 crc kubenswrapper[4746]: E0129 16:55:11.428866 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift podName:4434dba0-90da-4ac0-8cd4-5c2babfdb2eb nodeName:}" failed. No retries permitted until 2026-01-29 16:55:27.428843186 +0000 UTC m=+1249.829427850 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift") pod "swift-storage-0" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb") : configmap "swift-ring-files" not found Jan 29 16:55:11 crc kubenswrapper[4746]: I0129 16:55:11.875928 4746 generic.go:334] "Generic (PLEG): container finished" podID="2ab64d41-3d73-42d4-abfc-7c65b9c54970" containerID="f20c1208bb170c0dc12ec84c9358d47475e98d721184240642696df4d5199cc4" exitCode=0 Jan 29 16:55:11 crc kubenswrapper[4746]: I0129 16:55:11.876068 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p9k8d" event={"ID":"2ab64d41-3d73-42d4-abfc-7c65b9c54970","Type":"ContainerDied","Data":"f20c1208bb170c0dc12ec84c9358d47475e98d721184240642696df4d5199cc4"} Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.233573 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.240068 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c5843-2971-4976-a276-82689a1ee336-operator-scripts\") pod \"239c5843-2971-4976-a276-82689a1ee336\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.240132 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg7vk\" (UniqueName: \"kubernetes.io/projected/239c5843-2971-4976-a276-82689a1ee336-kube-api-access-lg7vk\") pod \"239c5843-2971-4976-a276-82689a1ee336\" (UID: \"239c5843-2971-4976-a276-82689a1ee336\") " Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.241476 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239c5843-2971-4976-a276-82689a1ee336-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "239c5843-2971-4976-a276-82689a1ee336" (UID: "239c5843-2971-4976-a276-82689a1ee336"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.246070 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239c5843-2971-4976-a276-82689a1ee336-kube-api-access-lg7vk" (OuterVolumeSpecName: "kube-api-access-lg7vk") pod "239c5843-2971-4976-a276-82689a1ee336" (UID: "239c5843-2971-4976-a276-82689a1ee336"). InnerVolumeSpecName "kube-api-access-lg7vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.341878 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/239c5843-2971-4976-a276-82689a1ee336-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.341908 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg7vk\" (UniqueName: \"kubernetes.io/projected/239c5843-2971-4976-a276-82689a1ee336-kube-api-access-lg7vk\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.591342 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pplw4" podUID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" containerName="ovn-controller" probeResult="failure" output=< Jan 29 16:55:12 crc kubenswrapper[4746]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 16:55:12 crc kubenswrapper[4746]: > Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.675269 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hlgxj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.675829 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hlgxj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.803022 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qpnkt"] Jan 29 16:55:12 crc kubenswrapper[4746]: E0129 16:55:12.803358 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239c5843-2971-4976-a276-82689a1ee336" containerName="mariadb-account-create-update" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.803371 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="239c5843-2971-4976-a276-82689a1ee336" containerName="mariadb-account-create-update" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.803549 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="239c5843-2971-4976-a276-82689a1ee336" containerName="mariadb-account-create-update" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.804080 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.806350 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.807631 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zpldn" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.818478 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qpnkt"] Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.851818 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-db-sync-config-data\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.852339 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-combined-ca-bundle\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.852570 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhbw8\" (UniqueName: \"kubernetes.io/projected/cc2d9bf4-a560-4888-bd41-01b29066a20c-kube-api-access-bhbw8\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.852615 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-config-data\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.884351 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zlds" event={"ID":"239c5843-2971-4976-a276-82689a1ee336","Type":"ContainerDied","Data":"325761bb8233c91d52bb88423d09f09f242a7db367deb6ce5ed82b3dde887106"} Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.884396 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="325761bb8233c91d52bb88423d09f09f242a7db367deb6ce5ed82b3dde887106" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.884577 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zlds" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.915219 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pplw4-config-6g6wj"] Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.917466 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.919455 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.929988 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pplw4-config-6g6wj"] Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.954871 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-combined-ca-bundle\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.954918 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brxmv\" (UniqueName: \"kubernetes.io/projected/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-kube-api-access-brxmv\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.954955 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-additional-scripts\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.955009 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run-ovn\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.955029 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-log-ovn\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.955048 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-scripts\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.955098 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhbw8\" (UniqueName: \"kubernetes.io/projected/cc2d9bf4-a560-4888-bd41-01b29066a20c-kube-api-access-bhbw8\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.955117 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-config-data\") pod \"glance-db-sync-qpnkt\" (UID: 
\"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.955135 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.955242 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-db-sync-config-data\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.962547 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-config-data\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.965576 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-combined-ca-bundle\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.968641 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-db-sync-config-data\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:12 crc kubenswrapper[4746]: I0129 16:55:12.975483 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhbw8\" (UniqueName: \"kubernetes.io/projected/cc2d9bf4-a560-4888-bd41-01b29066a20c-kube-api-access-bhbw8\") pod \"glance-db-sync-qpnkt\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.056784 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.057200 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brxmv\" (UniqueName: \"kubernetes.io/projected/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-kube-api-access-brxmv\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.057227 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-additional-scripts\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 
16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.057252 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run-ovn\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.057274 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-log-ovn\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.057294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-scripts\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.059146 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-scripts\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.059383 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.060017 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-additional-scripts\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.060069 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run-ovn\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.060104 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-log-ovn\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.076805 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brxmv\" (UniqueName: \"kubernetes.io/projected/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-kube-api-access-brxmv\") pod \"ovn-controller-pplw4-config-6g6wj\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.122316 
4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.236371 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.319488 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.366904 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-swiftconf\") pod \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.367754 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-ring-data-devices\") pod \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.367803 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2ab64d41-3d73-42d4-abfc-7c65b9c54970-etc-swift\") pod \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.367848 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-scripts\") pod \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.367876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snqh6\" (UniqueName: \"kubernetes.io/projected/2ab64d41-3d73-42d4-abfc-7c65b9c54970-kube-api-access-snqh6\") pod \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.367942 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-dispersionconf\") pod \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.368034 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-combined-ca-bundle\") pod \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\" (UID: \"2ab64d41-3d73-42d4-abfc-7c65b9c54970\") " Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.368752 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2ab64d41-3d73-42d4-abfc-7c65b9c54970" (UID: "2ab64d41-3d73-42d4-abfc-7c65b9c54970"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.368903 4746 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.369029 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ab64d41-3d73-42d4-abfc-7c65b9c54970-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2ab64d41-3d73-42d4-abfc-7c65b9c54970" (UID: "2ab64d41-3d73-42d4-abfc-7c65b9c54970"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.386035 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab64d41-3d73-42d4-abfc-7c65b9c54970-kube-api-access-snqh6" (OuterVolumeSpecName: "kube-api-access-snqh6") pod "2ab64d41-3d73-42d4-abfc-7c65b9c54970" (UID: "2ab64d41-3d73-42d4-abfc-7c65b9c54970"). InnerVolumeSpecName "kube-api-access-snqh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.391964 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ab64d41-3d73-42d4-abfc-7c65b9c54970" (UID: "2ab64d41-3d73-42d4-abfc-7c65b9c54970"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.394071 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2ab64d41-3d73-42d4-abfc-7c65b9c54970" (UID: "2ab64d41-3d73-42d4-abfc-7c65b9c54970"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.399200 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2ab64d41-3d73-42d4-abfc-7c65b9c54970" (UID: "2ab64d41-3d73-42d4-abfc-7c65b9c54970"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.403111 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-scripts" (OuterVolumeSpecName: "scripts") pod "2ab64d41-3d73-42d4-abfc-7c65b9c54970" (UID: "2ab64d41-3d73-42d4-abfc-7c65b9c54970"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.470785 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.470816 4746 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.470825 4746 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2ab64d41-3d73-42d4-abfc-7c65b9c54970-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.470834 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2ab64d41-3d73-42d4-abfc-7c65b9c54970-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.470844 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snqh6\" (UniqueName: \"kubernetes.io/projected/2ab64d41-3d73-42d4-abfc-7c65b9c54970-kube-api-access-snqh6\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.470855 4746 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2ab64d41-3d73-42d4-abfc-7c65b9c54970-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.663520 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qpnkt"] Jan 29 16:55:13 crc kubenswrapper[4746]: W0129 16:55:13.665161 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc2d9bf4_a560_4888_bd41_01b29066a20c.slice/crio-577306c93bda4270714b5d4c0060959819360229049f414d903ed3c9553b47f4 WatchSource:0}: Error finding container 577306c93bda4270714b5d4c0060959819360229049f414d903ed3c9553b47f4: Status 404 returned error can't find the container with id 577306c93bda4270714b5d4c0060959819360229049f414d903ed3c9553b47f4 Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.718868 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pplw4-config-6g6wj"] Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.898040 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4-config-6g6wj" event={"ID":"eb2dc48b-567c-4a7d-a3bb-87046c4689b2","Type":"ContainerStarted","Data":"27e2c5f0fb4a7e7bfb89349a8933c882f83bb2e0513a22895bfcdf82bf7d80eb"} Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.899912 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p9k8d" event={"ID":"2ab64d41-3d73-42d4-abfc-7c65b9c54970","Type":"ContainerDied","Data":"c05df8cda690f07c701f9b8fc08d1e029bbab99d29657a29c630803111876a40"} Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.899953 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-p9k8d" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.899959 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05df8cda690f07c701f9b8fc08d1e029bbab99d29657a29c630803111876a40" Jan 29 16:55:13 crc kubenswrapper[4746]: I0129 16:55:13.902469 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qpnkt" event={"ID":"cc2d9bf4-a560-4888-bd41-01b29066a20c","Type":"ContainerStarted","Data":"577306c93bda4270714b5d4c0060959819360229049f414d903ed3c9553b47f4"} Jan 29 16:55:14 crc kubenswrapper[4746]: I0129 16:55:14.911001 4746 generic.go:334] "Generic (PLEG): container finished" podID="eb2dc48b-567c-4a7d-a3bb-87046c4689b2" containerID="91fce55c9d75c1b331d8bd42c9897a8976e8ccd42870a105712562f5ecc517d2" exitCode=0 Jan 29 16:55:14 crc kubenswrapper[4746]: I0129 16:55:14.911313 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4-config-6g6wj" event={"ID":"eb2dc48b-567c-4a7d-a3bb-87046c4689b2","Type":"ContainerDied","Data":"91fce55c9d75c1b331d8bd42c9897a8976e8ccd42870a105712562f5ecc517d2"} Jan 29 16:55:15 crc kubenswrapper[4746]: I0129 16:55:15.932574 4746 generic.go:334] "Generic (PLEG): container finished" podID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerID="560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255" exitCode=0 Jan 29 16:55:15 crc kubenswrapper[4746]: I0129 16:55:15.932786 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788","Type":"ContainerDied","Data":"560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255"} Jan 29 16:55:15 crc kubenswrapper[4746]: I0129 16:55:15.973140 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9zlds"] Jan 29 16:55:15 crc kubenswrapper[4746]: I0129 16:55:15.981483 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9zlds"] Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.284993 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.329721 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brxmv\" (UniqueName: \"kubernetes.io/projected/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-kube-api-access-brxmv\") pod \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.330095 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-log-ovn\") pod \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.330215 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "eb2dc48b-567c-4a7d-a3bb-87046c4689b2" (UID: "eb2dc48b-567c-4a7d-a3bb-87046c4689b2"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.330389 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run\") pod \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.330466 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run-ovn\") pod \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.330402 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run" (OuterVolumeSpecName: "var-run") pod "eb2dc48b-567c-4a7d-a3bb-87046c4689b2" (UID: "eb2dc48b-567c-4a7d-a3bb-87046c4689b2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.330562 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-scripts\") pod \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.330633 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "eb2dc48b-567c-4a7d-a3bb-87046c4689b2" (UID: "eb2dc48b-567c-4a7d-a3bb-87046c4689b2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.331020 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-additional-scripts\") pod \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\" (UID: \"eb2dc48b-567c-4a7d-a3bb-87046c4689b2\") " Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.331631 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "eb2dc48b-567c-4a7d-a3bb-87046c4689b2" (UID: "eb2dc48b-567c-4a7d-a3bb-87046c4689b2"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.331710 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-scripts" (OuterVolumeSpecName: "scripts") pod "eb2dc48b-567c-4a7d-a3bb-87046c4689b2" (UID: "eb2dc48b-567c-4a7d-a3bb-87046c4689b2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.331993 4746 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.332089 4746 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.332164 4746 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.332274 4746 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.332539 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.336031 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-kube-api-access-brxmv" (OuterVolumeSpecName: "kube-api-access-brxmv") pod "eb2dc48b-567c-4a7d-a3bb-87046c4689b2" (UID: "eb2dc48b-567c-4a7d-a3bb-87046c4689b2"). InnerVolumeSpecName "kube-api-access-brxmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.434418 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brxmv\" (UniqueName: \"kubernetes.io/projected/eb2dc48b-567c-4a7d-a3bb-87046c4689b2-kube-api-access-brxmv\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.466423 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239c5843-2971-4976-a276-82689a1ee336" path="/var/lib/kubelet/pods/239c5843-2971-4976-a276-82689a1ee336/volumes" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.942472 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788","Type":"ContainerStarted","Data":"9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553"} Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.943075 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.949894 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4-config-6g6wj" event={"ID":"eb2dc48b-567c-4a7d-a3bb-87046c4689b2","Type":"ContainerDied","Data":"27e2c5f0fb4a7e7bfb89349a8933c882f83bb2e0513a22895bfcdf82bf7d80eb"} Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.949939 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27e2c5f0fb4a7e7bfb89349a8933c882f83bb2e0513a22895bfcdf82bf7d80eb" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.949994 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pplw4-config-6g6wj" Jan 29 16:55:16 crc kubenswrapper[4746]: I0129 16:55:16.978294 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.594945588 podStartE2EDuration="1m19.978270618s" podCreationTimestamp="2026-01-29 16:53:57 +0000 UTC" firstStartedPulling="2026-01-29 16:53:59.552434408 +0000 UTC m=+1161.953019052" lastFinishedPulling="2026-01-29 16:54:41.935759438 +0000 UTC m=+1204.336344082" observedRunningTime="2026-01-29 16:55:16.974688401 +0000 UTC m=+1239.375273045" watchObservedRunningTime="2026-01-29 16:55:16.978270618 +0000 UTC m=+1239.378855252" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.381227 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pplw4-config-6g6wj"] Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.387856 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pplw4-config-6g6wj"] Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.493929 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pplw4-config-fnrw9"] Jan 29 16:55:17 crc kubenswrapper[4746]: E0129 16:55:17.494312 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab64d41-3d73-42d4-abfc-7c65b9c54970" containerName="swift-ring-rebalance" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.494338 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab64d41-3d73-42d4-abfc-7c65b9c54970" containerName="swift-ring-rebalance" Jan 29 16:55:17 crc kubenswrapper[4746]: E0129 16:55:17.494352 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2dc48b-567c-4a7d-a3bb-87046c4689b2" containerName="ovn-config" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.494360 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2dc48b-567c-4a7d-a3bb-87046c4689b2" containerName="ovn-config" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.494519 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2dc48b-567c-4a7d-a3bb-87046c4689b2" containerName="ovn-config" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.494535 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab64d41-3d73-42d4-abfc-7c65b9c54970" containerName="swift-ring-rebalance" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.495045 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.498722 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.513095 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pplw4-config-fnrw9"] Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.555386 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.555475 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-scripts\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.555661 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-log-ovn\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.555716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-additional-scripts\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.555778 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run-ovn\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.555960 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlsgq\" (UniqueName: \"kubernetes.io/projected/bd76c646-881a-47e3-877a-123797edaa0e-kube-api-access-zlsgq\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.590237 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pplw4" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.657968 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.658027 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-scripts\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.658131 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-log-ovn\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.659003 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-additional-scripts\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.659052 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run-ovn\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.659365 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlsgq\" (UniqueName: \"kubernetes.io/projected/bd76c646-881a-47e3-877a-123797edaa0e-kube-api-access-zlsgq\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.658775 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.658839 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-log-ovn\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.660621 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-additional-scripts\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.660715 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run-ovn\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.660727 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-scripts\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.694340 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlsgq\" (UniqueName: \"kubernetes.io/projected/bd76c646-881a-47e3-877a-123797edaa0e-kube-api-access-zlsgq\") pod \"ovn-controller-pplw4-config-fnrw9\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:17 crc kubenswrapper[4746]: I0129 16:55:17.813984 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:18 crc kubenswrapper[4746]: I0129 16:55:18.263106 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pplw4-config-fnrw9"] Jan 29 16:55:18 crc kubenswrapper[4746]: I0129 16:55:18.457894 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb2dc48b-567c-4a7d-a3bb-87046c4689b2" path="/var/lib/kubelet/pods/eb2dc48b-567c-4a7d-a3bb-87046c4689b2/volumes" Jan 29 16:55:18 crc kubenswrapper[4746]: I0129 16:55:18.969919 4746 generic.go:334] "Generic (PLEG): container finished" podID="bd76c646-881a-47e3-877a-123797edaa0e" containerID="ad4b6ab3285c9071345dd17ada713cdadb52fd39a2d489befc05fc5b022fff09" exitCode=0 Jan 29 16:55:18 crc kubenswrapper[4746]: I0129 16:55:18.969985 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4-config-fnrw9" event={"ID":"bd76c646-881a-47e3-877a-123797edaa0e","Type":"ContainerDied","Data":"ad4b6ab3285c9071345dd17ada713cdadb52fd39a2d489befc05fc5b022fff09"} Jan 29 16:55:18 crc kubenswrapper[4746]: I0129 16:55:18.970030 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4-config-fnrw9" event={"ID":"bd76c646-881a-47e3-877a-123797edaa0e","Type":"ContainerStarted","Data":"e6d11202939d28f0200ca9c8b08f877b4ae7a7f0bd108d17fa2647a2de45c7fc"} Jan 29 16:55:20 crc kubenswrapper[4746]: I0129 16:55:20.994274 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-m4gmj"] Jan 29 16:55:20 crc kubenswrapper[4746]: I0129 16:55:20.995756 4746 util.go:30] "No sandbox for pod can be found. 
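
The fnrw9 entries above trace the whole life of a short-lived config job in one pass: SyncLoop ADD, VerifyControllerAttachedVolume for each volume, MountVolume started and MountVolume.SetUp succeeded (logged slightly out of timestamp order because the mount operations run on separate goroutines), sandbox creation, then ContainerStarted and ContainerDied with exit 0. A small tracker over the SyncLoop and PLEG entries makes that lifecycle easy to read off; the phase listing is ours, a triage aid rather than kubelet's own state machine, and the regexes assume the formats shown above:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Prints a per-pod timeline of SyncLoop ADD/UPDATE/DELETE/REMOVE and
    // PLEG ContainerStarted/ContainerDied events, in log order.
    var (
    	syncRe = regexp.MustCompile(`"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\["([^"]+)"\]`)
    	plegRe = regexp.MustCompile(`event for pod" pod="([^"]+)" event={"ID":"[^"]+","Type":"(\w+)","Data":"(\w+)"}`)
    )

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
    	for sc.Scan() {
    		line := sc.Text()
    		if m := syncRe.FindStringSubmatch(line); m != nil {
    			fmt.Printf("%-50s %s\n", m[2], m[1])
    		} else if m := plegRe.FindStringSubmatch(line); m != nil {
    			fmt.Printf("%-50s %s %.12s\n", m[1], m[2], m[3])
    		}
    	}
    }

For openstack/ovn-controller-pplw4-config-fnrw9 this yields ADD, UPDATE, ContainerDied ad4b6ab3..., ContainerStarted e6d11202..., matching the sequence above (the PLEG relist can report the died container before the sandbox start that preceded it).
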
Need to start a new one" pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:20 crc kubenswrapper[4746]: I0129 16:55:20.997892 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.003089 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m4gmj"] Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.030159 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8n7\" (UniqueName: \"kubernetes.io/projected/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-kube-api-access-fk8n7\") pod \"root-account-create-update-m4gmj\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.030258 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-operator-scripts\") pod \"root-account-create-update-m4gmj\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.132255 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8n7\" (UniqueName: \"kubernetes.io/projected/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-kube-api-access-fk8n7\") pod \"root-account-create-update-m4gmj\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.133302 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-operator-scripts\") pod \"root-account-create-update-m4gmj\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.135787 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-operator-scripts\") pod \"root-account-create-update-m4gmj\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.159009 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8n7\" (UniqueName: \"kubernetes.io/projected/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-kube-api-access-fk8n7\") pod \"root-account-create-update-m4gmj\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.314513 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.992455 4746 generic.go:334] "Generic (PLEG): container finished" podID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerID="f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e" exitCode=0 Jan 29 16:55:21 crc kubenswrapper[4746]: I0129 16:55:21.992518 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71c96526-7c37-42c2-896e-b551dd6ed5b8","Type":"ContainerDied","Data":"f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e"} Jan 29 16:55:27 crc kubenswrapper[4746]: I0129 16:55:27.448092 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:55:27 crc kubenswrapper[4746]: I0129 16:55:27.454804 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"swift-storage-0\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " pod="openstack/swift-storage-0" Jan 29 16:55:27 crc kubenswrapper[4746]: I0129 16:55:27.663745 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.679258 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 29 16:55:28 crc kubenswrapper[4746]: E0129 16:55:28.684653 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Jan 29 16:55:28 crc kubenswrapper[4746]: E0129 16:55:28.684885 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhbw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-qpnkt_openstack(cc2d9bf4-a560-4888-bd41-01b29066a20c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:55:28 crc kubenswrapper[4746]: E0129 16:55:28.686013 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-qpnkt" podUID="cc2d9bf4-a560-4888-bd41-01b29066a20c" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.846460 4746 util.go:48] "No ready sandbox for pod can be found. 
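
One pull failure surfaces three times above: the CRI-level error in log.go ("copying config: context canceled"), the kuberuntime_manager "Unhandled Error" that dumps the entire serialized Container spec, and the pod_workers "Error syncing pod" that records the ErrImagePull reason. The ImagePullBackOff for the same pod at 16:55:29 below is the retry path. Between retries kubelet applies an exponential backoff; the sketch below prints such a schedule under ASSUMED constants (10s initial, doubling, 5m cap, the commonly cited kubelet defaults), which are not visible in this log and vary with version and configuration:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Prints an image-pull retry schedule under an exponential backoff.
    // The constants are assumptions, not values read from this log.
    func main() {
    	const (
    		initial  = 10 * time.Second
    		factor   = 2
    		maxDelay = 5 * time.Minute
    	)
    	d := initial
    	for i := 1; i <= 8; i++ {
    		fmt.Printf("retry %d after %v\n", i, d)
    		d *= factor
    		if d > maxDelay {
    			d = maxDelay
    		}
    	}
    }
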
Need to start a new one" pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.876014 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-additional-scripts\") pod \"bd76c646-881a-47e3-877a-123797edaa0e\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.876066 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-scripts\") pod \"bd76c646-881a-47e3-877a-123797edaa0e\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.876229 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-log-ovn\") pod \"bd76c646-881a-47e3-877a-123797edaa0e\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.876277 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run\") pod \"bd76c646-881a-47e3-877a-123797edaa0e\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.876314 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run-ovn\") pod \"bd76c646-881a-47e3-877a-123797edaa0e\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.876367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlsgq\" (UniqueName: \"kubernetes.io/projected/bd76c646-881a-47e3-877a-123797edaa0e-kube-api-access-zlsgq\") pod \"bd76c646-881a-47e3-877a-123797edaa0e\" (UID: \"bd76c646-881a-47e3-877a-123797edaa0e\") " Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.877929 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bd76c646-881a-47e3-877a-123797edaa0e" (UID: "bd76c646-881a-47e3-877a-123797edaa0e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.878582 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bd76c646-881a-47e3-877a-123797edaa0e" (UID: "bd76c646-881a-47e3-877a-123797edaa0e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.878668 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run" (OuterVolumeSpecName: "var-run") pod "bd76c646-881a-47e3-877a-123797edaa0e" (UID: "bd76c646-881a-47e3-877a-123797edaa0e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.878698 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bd76c646-881a-47e3-877a-123797edaa0e" (UID: "bd76c646-881a-47e3-877a-123797edaa0e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.878996 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-scripts" (OuterVolumeSpecName: "scripts") pod "bd76c646-881a-47e3-877a-123797edaa0e" (UID: "bd76c646-881a-47e3-877a-123797edaa0e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.884998 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd76c646-881a-47e3-877a-123797edaa0e-kube-api-access-zlsgq" (OuterVolumeSpecName: "kube-api-access-zlsgq") pod "bd76c646-881a-47e3-877a-123797edaa0e" (UID: "bd76c646-881a-47e3-877a-123797edaa0e"). InnerVolumeSpecName "kube-api-access-zlsgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.978606 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlsgq\" (UniqueName: \"kubernetes.io/projected/bd76c646-881a-47e3-877a-123797edaa0e-kube-api-access-zlsgq\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.978638 4746 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.978647 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd76c646-881a-47e3-877a-123797edaa0e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.978660 4746 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.978678 4746 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:28 crc kubenswrapper[4746]: I0129 16:55:28.978690 4746 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd76c646-881a-47e3-877a-123797edaa0e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.085663 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4-config-fnrw9" event={"ID":"bd76c646-881a-47e3-877a-123797edaa0e","Type":"ContainerDied","Data":"e6d11202939d28f0200ca9c8b08f877b4ae7a7f0bd108d17fa2647a2de45c7fc"} Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.086006 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6d11202939d28f0200ca9c8b08f877b4ae7a7f0bd108d17fa2647a2de45c7fc" Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 
16:55:29.085737 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pplw4-config-fnrw9" Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.087638 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71c96526-7c37-42c2-896e-b551dd6ed5b8","Type":"ContainerStarted","Data":"6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45"} Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.088372 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 16:55:29 crc kubenswrapper[4746]: E0129 16:55:29.090391 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-qpnkt" podUID="cc2d9bf4-a560-4888-bd41-01b29066a20c" Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.142252 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371945.712543 podStartE2EDuration="1m31.142233068s" podCreationTimestamp="2026-01-29 16:53:58 +0000 UTC" firstStartedPulling="2026-01-29 16:53:59.884021635 +0000 UTC m=+1162.284606269" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:29.133686201 +0000 UTC m=+1251.534270845" watchObservedRunningTime="2026-01-29 16:55:29.142233068 +0000 UTC m=+1251.542817712" Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.156944 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-m4gmj"] Jan 29 16:55:29 crc kubenswrapper[4746]: W0129 16:55:29.160361 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66de13d9_6c00_4d6a_88a3_3fb2266d33aa.slice/crio-6bad870b791b54a343fab560b5bb975fd3bb9fc67c0e92fc9363a008c6ab33b0 WatchSource:0}: Error finding container 6bad870b791b54a343fab560b5bb975fd3bb9fc67c0e92fc9363a008c6ab33b0: Status 404 returned error can't find the container with id 6bad870b791b54a343fab560b5bb975fd3bb9fc67c0e92fc9363a008c6ab33b0 Jan 29 16:55:29 crc kubenswrapper[4746]: W0129 16:55:29.315305 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4434dba0_90da_4ac0_8cd4_5c2babfdb2eb.slice/crio-60d22ac1bcf8571bfff39d1d4b4e99ec8689944c0755be29527d1709185809d1 WatchSource:0}: Error finding container 60d22ac1bcf8571bfff39d1d4b4e99ec8689944c0755be29527d1709185809d1: Status 404 returned error can't find the container with id 60d22ac1bcf8571bfff39d1d4b4e99ec8689944c0755be29527d1709185809d1 Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.315646 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.931674 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pplw4-config-fnrw9"] Jan 29 16:55:29 crc kubenswrapper[4746]: I0129 16:55:29.942881 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pplw4-config-fnrw9"] Jan 29 16:55:30 crc kubenswrapper[4746]: I0129 16:55:30.097685 4746 generic.go:334] "Generic (PLEG): container finished" 
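
The podStartSLOduration=-9223371945.712543 above is an int64 artifact rather than a real measurement: lastFinishedPulling is the zero time ("0001-01-01 00:00:00"), so the pull-window subtraction saturates at Go's minimum time.Duration, and subtracting that from the 1m31.14s end-to-end duration wraps around int64. The sketch below (ours, using the timestamps from the entry) reproduces the exact logged value:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Reproduces podStartSLOduration=-9223371945.712543 from the entry above.
    // With lastFinishedPulling left at the zero time, time.Time.Sub saturates
    // at the minimum Duration, and e2e - pull then overflows int64 and wraps.
    func main() {
    	created := time.Date(2026, 1, 29, 16, 53, 58, 0, time.UTC)
    	observed := time.Date(2026, 1, 29, 16, 55, 29, 142233068, time.UTC)
    	firstPull := time.Date(2026, 1, 29, 16, 53, 59, 884021635, time.UTC)
    	var lastPull time.Time // zero value, as logged: 0001-01-01 00:00:00

    	e2e := observed.Sub(created)    // 1m31.142233068s
    	pull := lastPull.Sub(firstPull) // saturates: -2562047h47m16.854775808s
    	slo := e2e - pull               // int64 wraparound, goes negative
    	fmt.Printf("%.6f\n", slo.Seconds()) // -9223371945.712543
    }

In other words, the bogus value equals minimum-Duration plus the 91.142233068s end-to-end time, which is what the log shows.
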
podID="66de13d9-6c00-4d6a-88a3-3fb2266d33aa" containerID="ea8f470075d65d280e96ac2d25ee771c7eb9e3d5af76de3a2e471ff31e55e67f" exitCode=0 Jan 29 16:55:30 crc kubenswrapper[4746]: I0129 16:55:30.097989 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m4gmj" event={"ID":"66de13d9-6c00-4d6a-88a3-3fb2266d33aa","Type":"ContainerDied","Data":"ea8f470075d65d280e96ac2d25ee771c7eb9e3d5af76de3a2e471ff31e55e67f"} Jan 29 16:55:30 crc kubenswrapper[4746]: I0129 16:55:30.098013 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m4gmj" event={"ID":"66de13d9-6c00-4d6a-88a3-3fb2266d33aa","Type":"ContainerStarted","Data":"6bad870b791b54a343fab560b5bb975fd3bb9fc67c0e92fc9363a008c6ab33b0"} Jan 29 16:55:30 crc kubenswrapper[4746]: I0129 16:55:30.100104 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"60d22ac1bcf8571bfff39d1d4b4e99ec8689944c0755be29527d1709185809d1"} Jan 29 16:55:30 crc kubenswrapper[4746]: I0129 16:55:30.455786 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd76c646-881a-47e3-877a-123797edaa0e" path="/var/lib/kubelet/pods/bd76c646-881a-47e3-877a-123797edaa0e/volumes" Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.110408 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596"} Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.110779 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5"} Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.110796 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160"} Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.370603 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.418356 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-operator-scripts\") pod \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.418517 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk8n7\" (UniqueName: \"kubernetes.io/projected/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-kube-api-access-fk8n7\") pod \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\" (UID: \"66de13d9-6c00-4d6a-88a3-3fb2266d33aa\") " Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.419378 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66de13d9-6c00-4d6a-88a3-3fb2266d33aa" (UID: "66de13d9-6c00-4d6a-88a3-3fb2266d33aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.425397 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-kube-api-access-fk8n7" (OuterVolumeSpecName: "kube-api-access-fk8n7") pod "66de13d9-6c00-4d6a-88a3-3fb2266d33aa" (UID: "66de13d9-6c00-4d6a-88a3-3fb2266d33aa"). InnerVolumeSpecName "kube-api-access-fk8n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.520791 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:31 crc kubenswrapper[4746]: I0129 16:55:31.520843 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk8n7\" (UniqueName: \"kubernetes.io/projected/66de13d9-6c00-4d6a-88a3-3fb2266d33aa-kube-api-access-fk8n7\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:32 crc kubenswrapper[4746]: I0129 16:55:32.122855 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-m4gmj" Jan 29 16:55:32 crc kubenswrapper[4746]: I0129 16:55:32.122855 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-m4gmj" event={"ID":"66de13d9-6c00-4d6a-88a3-3fb2266d33aa","Type":"ContainerDied","Data":"6bad870b791b54a343fab560b5bb975fd3bb9fc67c0e92fc9363a008c6ab33b0"} Jan 29 16:55:32 crc kubenswrapper[4746]: I0129 16:55:32.123994 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bad870b791b54a343fab560b5bb975fd3bb9fc67c0e92fc9363a008c6ab33b0" Jan 29 16:55:32 crc kubenswrapper[4746]: I0129 16:55:32.125832 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db"} Jan 29 16:55:35 crc kubenswrapper[4746]: I0129 16:55:35.159067 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a"} Jan 29 16:55:35 crc kubenswrapper[4746]: I0129 16:55:35.159676 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26"} Jan 29 16:55:36 crc kubenswrapper[4746]: I0129 16:55:36.172006 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad"} Jan 29 16:55:36 crc kubenswrapper[4746]: I0129 16:55:36.172356 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e"} Jan 29 16:55:37 crc kubenswrapper[4746]: I0129 16:55:37.189123 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103"} Jan 29 16:55:37 crc kubenswrapper[4746]: I0129 16:55:37.190220 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce"} Jan 29 16:55:37 crc kubenswrapper[4746]: I0129 16:55:37.190234 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf"} Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.209264 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8"} Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.209828 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab"} Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.209842 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844"} Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.209854 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerStarted","Data":"d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2"} Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.253225 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.102651302 podStartE2EDuration="44.253176243s" podCreationTimestamp="2026-01-29 16:54:54 +0000 UTC" firstStartedPulling="2026-01-29 16:55:29.317272277 +0000 UTC m=+1251.717856921" lastFinishedPulling="2026-01-29 16:55:36.467797218 +0000 UTC m=+1258.868381862" observedRunningTime="2026-01-29 16:55:38.24894261 +0000 UTC m=+1260.649527264" watchObservedRunningTime="2026-01-29 16:55:38.253176243 +0000 UTC m=+1260.653760897" Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.543615 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9lnns"] Jan 29 16:55:38 crc kubenswrapper[4746]: E0129 16:55:38.544291 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66de13d9-6c00-4d6a-88a3-3fb2266d33aa" containerName="mariadb-account-create-update" Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.544310 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="66de13d9-6c00-4d6a-88a3-3fb2266d33aa" containerName="mariadb-account-create-update" Jan 29 16:55:38 crc kubenswrapper[4746]: E0129 16:55:38.544344 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd76c646-881a-47e3-877a-123797edaa0e" containerName="ovn-config" Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.544352 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd76c646-881a-47e3-877a-123797edaa0e" containerName="ovn-config" Jan 
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.544536 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd76c646-881a-47e3-877a-123797edaa0e" containerName="ovn-config"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.544574 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="66de13d9-6c00-4d6a-88a3-3fb2266d33aa" containerName="mariadb-account-create-update"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.545566 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.549766 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.556415 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9lnns"]
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.677349 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.737239 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.737292 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-config\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.737338 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.737417 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.737466 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78g5k\" (UniqueName: \"kubernetes.io/projected/aba66b06-8858-4c3c-abce-f263597324fb-kube-api-access-78g5k\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.737507 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.838700 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78g5k\" (UniqueName: \"kubernetes.io/projected/aba66b06-8858-4c3c-abce-f263597324fb-kube-api-access-78g5k\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.838764 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.838850 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.838868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-config\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.838911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.838958 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.839833 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.839976 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.841353 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-config\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.841967 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.842675 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.858830 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78g5k\" (UniqueName: \"kubernetes.io/projected/aba66b06-8858-4c3c-abce-f263597324fb-kube-api-access-78g5k\") pod \"dnsmasq-dns-8467b54bcc-9lnns\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:38 crc kubenswrapper[4746]: I0129 16:55:38.864344 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:39.157256 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9lnns"]
Jan 29 16:55:40 crc kubenswrapper[4746]: W0129 16:55:39.159199 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaba66b06_8858_4c3c_abce_f263597324fb.slice/crio-2e669ca7732f90b0b1bde42cbe929ed369d4f5897ed286275c556d39997f835f WatchSource:0}: Error finding container 2e669ca7732f90b0b1bde42cbe929ed369d4f5897ed286275c556d39997f835f: Status 404 returned error can't find the container with id 2e669ca7732f90b0b1bde42cbe929ed369d4f5897ed286275c556d39997f835f
Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:39.218246 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" event={"ID":"aba66b06-8858-4c3c-abce-f263597324fb","Type":"ContainerStarted","Data":"2e669ca7732f90b0b1bde42cbe929ed369d4f5897ed286275c556d39997f835f"}
Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:39.415401 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.228031 4746 generic.go:334] "Generic (PLEG): container finished" podID="aba66b06-8858-4c3c-abce-f263597324fb" containerID="40066284a95a4d9ea3228128f5d2792aac904ab47eee9df592a7b22d5021b9bb" exitCode=0
Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.228129 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" event={"ID":"aba66b06-8858-4c3c-abce-f263597324fb","Type":"ContainerDied","Data":"40066284a95a4d9ea3228128f5d2792aac904ab47eee9df592a7b22d5021b9bb"}
Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.744395 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xhqj8"]
Need to start a new one" pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.760181 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xhqj8"] Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.873734 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7108-account-create-update-2rrxt"] Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.874972 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.883499 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-797rg"] Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.883574 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.884792 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-797rg" Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.890580 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7108-account-create-update-2rrxt"] Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.898283 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2k5q\" (UniqueName: \"kubernetes.io/projected/b71f6b91-adc4-409b-80f6-8255c8c98f1a-kube-api-access-d2k5q\") pod \"cinder-db-create-xhqj8\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.898372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71f6b91-adc4-409b-80f6-8255c8c98f1a-operator-scripts\") pod \"cinder-db-create-xhqj8\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.900611 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-797rg"] Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.987278 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5505-account-create-update-mcpqs"] Jan 29 16:55:40 crc kubenswrapper[4746]: I0129 16:55:40.988462 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.000009 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.000919 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qpl\" (UniqueName: \"kubernetes.io/projected/3404f909-99a2-4dd2-b7c4-0990f400d875-kube-api-access-t7qpl\") pod \"barbican-db-create-797rg\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") " pod="openstack/barbican-db-create-797rg" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.000981 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-operator-scripts\") pod \"barbican-7108-account-create-update-2rrxt\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") " pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.001029 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3404f909-99a2-4dd2-b7c4-0990f400d875-operator-scripts\") pod \"barbican-db-create-797rg\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") " pod="openstack/barbican-db-create-797rg" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.001051 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hck8\" (UniqueName: \"kubernetes.io/projected/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-kube-api-access-9hck8\") pod \"barbican-7108-account-create-update-2rrxt\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") " pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.001124 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2k5q\" (UniqueName: \"kubernetes.io/projected/b71f6b91-adc4-409b-80f6-8255c8c98f1a-kube-api-access-d2k5q\") pod \"cinder-db-create-xhqj8\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.001174 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71f6b91-adc4-409b-80f6-8255c8c98f1a-operator-scripts\") pod \"cinder-db-create-xhqj8\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.002123 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71f6b91-adc4-409b-80f6-8255c8c98f1a-operator-scripts\") pod \"cinder-db-create-xhqj8\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.016047 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5505-account-create-update-mcpqs"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.044786 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2k5q\" (UniqueName: \"kubernetes.io/projected/b71f6b91-adc4-409b-80f6-8255c8c98f1a-kube-api-access-d2k5q\") pod 
\"cinder-db-create-xhqj8\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.069226 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.103159 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3404f909-99a2-4dd2-b7c4-0990f400d875-operator-scripts\") pod \"barbican-db-create-797rg\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") " pod="openstack/barbican-db-create-797rg" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.103224 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hck8\" (UniqueName: \"kubernetes.io/projected/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-kube-api-access-9hck8\") pod \"barbican-7108-account-create-update-2rrxt\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") " pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.103266 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-operator-scripts\") pod \"cinder-5505-account-create-update-mcpqs\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") " pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.103395 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4mq\" (UniqueName: \"kubernetes.io/projected/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-kube-api-access-tg4mq\") pod \"cinder-5505-account-create-update-mcpqs\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") " pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.103437 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qpl\" (UniqueName: \"kubernetes.io/projected/3404f909-99a2-4dd2-b7c4-0990f400d875-kube-api-access-t7qpl\") pod \"barbican-db-create-797rg\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") " pod="openstack/barbican-db-create-797rg" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.103485 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-operator-scripts\") pod \"barbican-7108-account-create-update-2rrxt\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") " pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.104292 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-operator-scripts\") pod \"barbican-7108-account-create-update-2rrxt\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") " pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.105082 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3404f909-99a2-4dd2-b7c4-0990f400d875-operator-scripts\") pod \"barbican-db-create-797rg\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") " 
pod="openstack/barbican-db-create-797rg" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.125031 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hck8\" (UniqueName: \"kubernetes.io/projected/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-kube-api-access-9hck8\") pod \"barbican-7108-account-create-update-2rrxt\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") " pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.127704 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qpl\" (UniqueName: \"kubernetes.io/projected/3404f909-99a2-4dd2-b7c4-0990f400d875-kube-api-access-t7qpl\") pod \"barbican-db-create-797rg\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") " pod="openstack/barbican-db-create-797rg" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.160923 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-68qbv"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.162570 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.180441 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-68qbv"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.211136 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-operator-scripts\") pod \"cinder-5505-account-create-update-mcpqs\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") " pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.211482 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4mq\" (UniqueName: \"kubernetes.io/projected/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-kube-api-access-tg4mq\") pod \"cinder-5505-account-create-update-mcpqs\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") " pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.211917 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-797rg" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.212369 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7108-account-create-update-2rrxt" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.213207 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-operator-scripts\") pod \"cinder-5505-account-create-update-mcpqs\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") " pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.232571 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-tql25"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.234312 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.243774 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tql25"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.247440 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.247719 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.247912 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f9wb5" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.248079 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.266084 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg4mq\" (UniqueName: \"kubernetes.io/projected/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-kube-api-access-tg4mq\") pod \"cinder-5505-account-create-update-mcpqs\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") " pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.274084 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56c7-account-create-update-cjx52"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.275559 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.276864 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" event={"ID":"aba66b06-8858-4c3c-abce-f263597324fb","Type":"ContainerStarted","Data":"e1e18334bb7b47583f796919d0747b0d3f1ab0fcf0e5c5e253129d612ce6d1d0"} Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.277596 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.277799 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.297427 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56c7-account-create-update-cjx52"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.313475 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f44c462-c033-46c5-a3f8-090bb92234c8-operator-scripts\") pod \"neutron-db-create-68qbv\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.313558 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4862\" (UniqueName: \"kubernetes.io/projected/8f44c462-c033-46c5-a3f8-090bb92234c8-kube-api-access-q4862\") pod \"neutron-db-create-68qbv\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.327533 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" podStartSLOduration=3.327516171 
podStartE2EDuration="3.327516171s" podCreationTimestamp="2026-01-29 16:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:41.314963067 +0000 UTC m=+1263.715547701" watchObservedRunningTime="2026-01-29 16:55:41.327516171 +0000 UTC m=+1263.728100815" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.330809 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5505-account-create-update-mcpqs" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.432755 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f44c462-c033-46c5-a3f8-090bb92234c8-operator-scripts\") pod \"neutron-db-create-68qbv\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.433026 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-combined-ca-bundle\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.433226 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-config-data\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.433263 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqdx\" (UniqueName: \"kubernetes.io/projected/f22df875-c939-436a-9163-28b0d53bf10c-kube-api-access-hbqdx\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.433325 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4862\" (UniqueName: \"kubernetes.io/projected/8f44c462-c033-46c5-a3f8-090bb92234c8-kube-api-access-q4862\") pod \"neutron-db-create-68qbv\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.433379 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5cq\" (UniqueName: \"kubernetes.io/projected/96040470-4dec-4d11-9751-860b901ca710-kube-api-access-gk5cq\") pod \"neutron-56c7-account-create-update-cjx52\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") " pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.433473 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96040470-4dec-4d11-9751-860b901ca710-operator-scripts\") pod \"neutron-56c7-account-create-update-cjx52\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") " pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.434075 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f44c462-c033-46c5-a3f8-090bb92234c8-operator-scripts\") pod \"neutron-db-create-68qbv\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.462591 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4862\" (UniqueName: \"kubernetes.io/projected/8f44c462-c033-46c5-a3f8-090bb92234c8-kube-api-access-q4862\") pod \"neutron-db-create-68qbv\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.535805 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-config-data\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.535866 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbqdx\" (UniqueName: \"kubernetes.io/projected/f22df875-c939-436a-9163-28b0d53bf10c-kube-api-access-hbqdx\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.535905 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk5cq\" (UniqueName: \"kubernetes.io/projected/96040470-4dec-4d11-9751-860b901ca710-kube-api-access-gk5cq\") pod \"neutron-56c7-account-create-update-cjx52\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") " pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.535958 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96040470-4dec-4d11-9751-860b901ca710-operator-scripts\") pod \"neutron-56c7-account-create-update-cjx52\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") " pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.535998 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-combined-ca-bundle\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.557525 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96040470-4dec-4d11-9751-860b901ca710-operator-scripts\") pod \"neutron-56c7-account-create-update-cjx52\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") " pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.564290 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-combined-ca-bundle\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.564285 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-config-data\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.568669 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk5cq\" (UniqueName: \"kubernetes.io/projected/96040470-4dec-4d11-9751-860b901ca710-kube-api-access-gk5cq\") pod \"neutron-56c7-account-create-update-cjx52\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") " pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.568862 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbqdx\" (UniqueName: \"kubernetes.io/projected/f22df875-c939-436a-9163-28b0d53bf10c-kube-api-access-hbqdx\") pod \"keystone-db-sync-tql25\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") " pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.587045 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.599032 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xhqj8"] Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.599720 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tql25" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.619765 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c7-account-create-update-cjx52" Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.774404 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7108-account-create-update-2rrxt"] Jan 29 16:55:41 crc kubenswrapper[4746]: W0129 16:55:41.790829 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2472226d_c8d4_48ef_a2da_f3dc8fd9695b.slice/crio-e8301f0cf69ba2a89b23392816c4de7b0ad0c187f8cf471c5914dfab91bd886f WatchSource:0}: Error finding container e8301f0cf69ba2a89b23392816c4de7b0ad0c187f8cf471c5914dfab91bd886f: Status 404 returned error can't find the container with id e8301f0cf69ba2a89b23392816c4de7b0ad0c187f8cf471c5914dfab91bd886f Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.791339 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-797rg"] Jan 29 16:55:41 crc kubenswrapper[4746]: W0129 16:55:41.794573 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3404f909_99a2_4dd2_b7c4_0990f400d875.slice/crio-347a6e29f7b8276463e468e5e6e345d4651033e5707144412e37acf59e9891d9 WatchSource:0}: Error finding container 347a6e29f7b8276463e468e5e6e345d4651033e5707144412e37acf59e9891d9: Status 404 returned error can't find the container with id 347a6e29f7b8276463e468e5e6e345d4651033e5707144412e37acf59e9891d9 Jan 29 16:55:41 crc kubenswrapper[4746]: I0129 16:55:41.905245 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5505-account-create-update-mcpqs"] Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.031873 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-68qbv"] Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.094824 4746 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56c7-account-create-update-cjx52"] Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.151760 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-tql25"] Jan 29 16:55:42 crc kubenswrapper[4746]: W0129 16:55:42.158970 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf22df875_c939_436a_9163_28b0d53bf10c.slice/crio-cc8967975758c1b093c621e80c43828ef05597f67f1bdb2ec5890f7cb2250bcc WatchSource:0}: Error finding container cc8967975758c1b093c621e80c43828ef05597f67f1bdb2ec5890f7cb2250bcc: Status 404 returned error can't find the container with id cc8967975758c1b093c621e80c43828ef05597f67f1bdb2ec5890f7cb2250bcc Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.295903 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5505-account-create-update-mcpqs" event={"ID":"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a","Type":"ContainerStarted","Data":"247ac07987850938ef89b14311ebd44b3cedeffae516773cfd9ba11573533376"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.296240 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5505-account-create-update-mcpqs" event={"ID":"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a","Type":"ContainerStarted","Data":"563e175d044aafb3d1b566020c7ff3d16829fa4bb29716a4534755ca0e715d7e"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.301290 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhqj8" event={"ID":"b71f6b91-adc4-409b-80f6-8255c8c98f1a","Type":"ContainerStarted","Data":"bb90355332813dd85cb7f7decec6421abdc591007933bd11bbc0f650d9a5034b"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.301333 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhqj8" event={"ID":"b71f6b91-adc4-409b-80f6-8255c8c98f1a","Type":"ContainerStarted","Data":"b0dc69458cc6af04f7260b516677121e547ce5df93d71cc2e916f2f1299b9fce"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.302567 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68qbv" event={"ID":"8f44c462-c033-46c5-a3f8-090bb92234c8","Type":"ContainerStarted","Data":"72d904d5eb8cd4782088f0ec278f8566190a993c5b6b18087df57b25cb9aefdb"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.306350 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7108-account-create-update-2rrxt" event={"ID":"2472226d-c8d4-48ef-a2da-f3dc8fd9695b","Type":"ContainerStarted","Data":"1b15c59d49be889ef5cf6ad8a226bd7e4a9df62c00fdb98d1c98f69a191e1541"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.306401 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7108-account-create-update-2rrxt" event={"ID":"2472226d-c8d4-48ef-a2da-f3dc8fd9695b","Type":"ContainerStarted","Data":"e8301f0cf69ba2a89b23392816c4de7b0ad0c187f8cf471c5914dfab91bd886f"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.309463 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-797rg" event={"ID":"3404f909-99a2-4dd2-b7c4-0990f400d875","Type":"ContainerStarted","Data":"633a8d8450a2c8efba7958505172f6f8ef9a64dcfdd943bce08046cda4c7b216"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.309505 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-797rg" 
event={"ID":"3404f909-99a2-4dd2-b7c4-0990f400d875","Type":"ContainerStarted","Data":"347a6e29f7b8276463e468e5e6e345d4651033e5707144412e37acf59e9891d9"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.311272 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c7-account-create-update-cjx52" event={"ID":"96040470-4dec-4d11-9751-860b901ca710","Type":"ContainerStarted","Data":"3f5e858a0173d7e6113325131f56f384b8f3b4fa6a6ec8fd5d81ae972c30db7a"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.313494 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tql25" event={"ID":"f22df875-c939-436a-9163-28b0d53bf10c","Type":"ContainerStarted","Data":"cc8967975758c1b093c621e80c43828ef05597f67f1bdb2ec5890f7cb2250bcc"} Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.322561 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-5505-account-create-update-mcpqs" podStartSLOduration=2.322542234 podStartE2EDuration="2.322542234s" podCreationTimestamp="2026-01-29 16:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:42.313842781 +0000 UTC m=+1264.714427425" watchObservedRunningTime="2026-01-29 16:55:42.322542234 +0000 UTC m=+1264.723126878" Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.350913 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-7108-account-create-update-2rrxt" podStartSLOduration=2.35089002 podStartE2EDuration="2.35089002s" podCreationTimestamp="2026-01-29 16:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:42.348941478 +0000 UTC m=+1264.749526142" watchObservedRunningTime="2026-01-29 16:55:42.35089002 +0000 UTC m=+1264.751474674" Jan 29 16:55:42 crc kubenswrapper[4746]: I0129 16:55:42.352730 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-797rg" podStartSLOduration=2.3527201079999998 podStartE2EDuration="2.352720108s" podCreationTimestamp="2026-01-29 16:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:42.33251976 +0000 UTC m=+1264.733104404" watchObservedRunningTime="2026-01-29 16:55:42.352720108 +0000 UTC m=+1264.753304752" Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.325624 4746 generic.go:334] "Generic (PLEG): container finished" podID="8f44c462-c033-46c5-a3f8-090bb92234c8" containerID="06338e398c5c955cb41a316a04585286b7feffc6ce84ecf5e5ca0fbabeb65c4c" exitCode=0 Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.326427 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68qbv" event={"ID":"8f44c462-c033-46c5-a3f8-090bb92234c8","Type":"ContainerDied","Data":"06338e398c5c955cb41a316a04585286b7feffc6ce84ecf5e5ca0fbabeb65c4c"} Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.327564 4746 generic.go:334] "Generic (PLEG): container finished" podID="3404f909-99a2-4dd2-b7c4-0990f400d875" containerID="633a8d8450a2c8efba7958505172f6f8ef9a64dcfdd943bce08046cda4c7b216" exitCode=0 Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.327631 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-797rg" 
event={"ID":"3404f909-99a2-4dd2-b7c4-0990f400d875","Type":"ContainerDied","Data":"633a8d8450a2c8efba7958505172f6f8ef9a64dcfdd943bce08046cda4c7b216"} Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.329608 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c7-account-create-update-cjx52" event={"ID":"96040470-4dec-4d11-9751-860b901ca710","Type":"ContainerStarted","Data":"46d73f84ade9cde1a2d80ad053100a17dec9500ab890a05557340b382b890e39"} Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.334696 4746 generic.go:334] "Generic (PLEG): container finished" podID="b71f6b91-adc4-409b-80f6-8255c8c98f1a" containerID="bb90355332813dd85cb7f7decec6421abdc591007933bd11bbc0f650d9a5034b" exitCode=0 Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.337276 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhqj8" event={"ID":"b71f6b91-adc4-409b-80f6-8255c8c98f1a","Type":"ContainerDied","Data":"bb90355332813dd85cb7f7decec6421abdc591007933bd11bbc0f650d9a5034b"} Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.361383 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56c7-account-create-update-cjx52" podStartSLOduration=2.361362304 podStartE2EDuration="2.361362304s" podCreationTimestamp="2026-01-29 16:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:43.354755178 +0000 UTC m=+1265.755339822" watchObservedRunningTime="2026-01-29 16:55:43.361362304 +0000 UTC m=+1265.761946958" Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.668165 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.692142 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2k5q\" (UniqueName: \"kubernetes.io/projected/b71f6b91-adc4-409b-80f6-8255c8c98f1a-kube-api-access-d2k5q\") pod \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.692333 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71f6b91-adc4-409b-80f6-8255c8c98f1a-operator-scripts\") pod \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\" (UID: \"b71f6b91-adc4-409b-80f6-8255c8c98f1a\") " Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.693301 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71f6b91-adc4-409b-80f6-8255c8c98f1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b71f6b91-adc4-409b-80f6-8255c8c98f1a" (UID: "b71f6b91-adc4-409b-80f6-8255c8c98f1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.698943 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71f6b91-adc4-409b-80f6-8255c8c98f1a-kube-api-access-d2k5q" (OuterVolumeSpecName: "kube-api-access-d2k5q") pod "b71f6b91-adc4-409b-80f6-8255c8c98f1a" (UID: "b71f6b91-adc4-409b-80f6-8255c8c98f1a"). InnerVolumeSpecName "kube-api-access-d2k5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.794697 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b71f6b91-adc4-409b-80f6-8255c8c98f1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:43 crc kubenswrapper[4746]: I0129 16:55:43.794750 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2k5q\" (UniqueName: \"kubernetes.io/projected/b71f6b91-adc4-409b-80f6-8255c8c98f1a-kube-api-access-d2k5q\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.352722 4746 generic.go:334] "Generic (PLEG): container finished" podID="2472226d-c8d4-48ef-a2da-f3dc8fd9695b" containerID="1b15c59d49be889ef5cf6ad8a226bd7e4a9df62c00fdb98d1c98f69a191e1541" exitCode=0 Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.352793 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7108-account-create-update-2rrxt" event={"ID":"2472226d-c8d4-48ef-a2da-f3dc8fd9695b","Type":"ContainerDied","Data":"1b15c59d49be889ef5cf6ad8a226bd7e4a9df62c00fdb98d1c98f69a191e1541"} Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.355422 4746 generic.go:334] "Generic (PLEG): container finished" podID="96040470-4dec-4d11-9751-860b901ca710" containerID="46d73f84ade9cde1a2d80ad053100a17dec9500ab890a05557340b382b890e39" exitCode=0 Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.355459 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c7-account-create-update-cjx52" event={"ID":"96040470-4dec-4d11-9751-860b901ca710","Type":"ContainerDied","Data":"46d73f84ade9cde1a2d80ad053100a17dec9500ab890a05557340b382b890e39"} Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.358416 4746 generic.go:334] "Generic (PLEG): container finished" podID="dadcfd7a-71a2-405b-8487-0dedc7cf9b6a" containerID="247ac07987850938ef89b14311ebd44b3cedeffae516773cfd9ba11573533376" exitCode=0 Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.358490 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5505-account-create-update-mcpqs" event={"ID":"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a","Type":"ContainerDied","Data":"247ac07987850938ef89b14311ebd44b3cedeffae516773cfd9ba11573533376"} Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.361469 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xhqj8" Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.361741 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhqj8" event={"ID":"b71f6b91-adc4-409b-80f6-8255c8c98f1a","Type":"ContainerDied","Data":"b0dc69458cc6af04f7260b516677121e547ce5df93d71cc2e916f2f1299b9fce"} Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.361776 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0dc69458cc6af04f7260b516677121e547ce5df93d71cc2e916f2f1299b9fce" Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.749351 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.809890 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4862\" (UniqueName: \"kubernetes.io/projected/8f44c462-c033-46c5-a3f8-090bb92234c8-kube-api-access-q4862\") pod \"8f44c462-c033-46c5-a3f8-090bb92234c8\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.810117 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f44c462-c033-46c5-a3f8-090bb92234c8-operator-scripts\") pod \"8f44c462-c033-46c5-a3f8-090bb92234c8\" (UID: \"8f44c462-c033-46c5-a3f8-090bb92234c8\") " Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.811315 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f44c462-c033-46c5-a3f8-090bb92234c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f44c462-c033-46c5-a3f8-090bb92234c8" (UID: "8f44c462-c033-46c5-a3f8-090bb92234c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.817477 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f44c462-c033-46c5-a3f8-090bb92234c8-kube-api-access-q4862" (OuterVolumeSpecName: "kube-api-access-q4862") pod "8f44c462-c033-46c5-a3f8-090bb92234c8" (UID: "8f44c462-c033-46c5-a3f8-090bb92234c8"). InnerVolumeSpecName "kube-api-access-q4862". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.911699 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4862\" (UniqueName: \"kubernetes.io/projected/8f44c462-c033-46c5-a3f8-090bb92234c8-kube-api-access-q4862\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:44 crc kubenswrapper[4746]: I0129 16:55:44.911727 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f44c462-c033-46c5-a3f8-090bb92234c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:45 crc kubenswrapper[4746]: I0129 16:55:45.383594 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qpnkt" event={"ID":"cc2d9bf4-a560-4888-bd41-01b29066a20c","Type":"ContainerStarted","Data":"0419721f2f0caa11c5b08fdd1ccc608f02549b1c0858e7d7e528265c7a907743"} Jan 29 16:55:45 crc kubenswrapper[4746]: I0129 16:55:45.388178 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-68qbv" Jan 29 16:55:45 crc kubenswrapper[4746]: I0129 16:55:45.389222 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68qbv" event={"ID":"8f44c462-c033-46c5-a3f8-090bb92234c8","Type":"ContainerDied","Data":"72d904d5eb8cd4782088f0ec278f8566190a993c5b6b18087df57b25cb9aefdb"} Jan 29 16:55:45 crc kubenswrapper[4746]: I0129 16:55:45.389302 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d904d5eb8cd4782088f0ec278f8566190a993c5b6b18087df57b25cb9aefdb" Jan 29 16:55:45 crc kubenswrapper[4746]: I0129 16:55:45.414090 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qpnkt" podStartSLOduration=2.998780965 podStartE2EDuration="33.41406386s" podCreationTimestamp="2026-01-29 16:55:12 +0000 UTC" firstStartedPulling="2026-01-29 16:55:13.667954305 +0000 UTC m=+1236.068538949" lastFinishedPulling="2026-01-29 16:55:44.08323719 +0000 UTC m=+1266.483821844" observedRunningTime="2026-01-29 16:55:45.407450114 +0000 UTC m=+1267.808034758" watchObservedRunningTime="2026-01-29 16:55:45.41406386 +0000 UTC m=+1267.814648504" Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.407744 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5505-account-create-update-mcpqs" event={"ID":"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a","Type":"ContainerDied","Data":"563e175d044aafb3d1b566020c7ff3d16829fa4bb29716a4534755ca0e715d7e"} Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.408047 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="563e175d044aafb3d1b566020c7ff3d16829fa4bb29716a4534755ca0e715d7e" Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.410297 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7108-account-create-update-2rrxt" event={"ID":"2472226d-c8d4-48ef-a2da-f3dc8fd9695b","Type":"ContainerDied","Data":"e8301f0cf69ba2a89b23392816c4de7b0ad0c187f8cf471c5914dfab91bd886f"} Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.410321 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8301f0cf69ba2a89b23392816c4de7b0ad0c187f8cf471c5914dfab91bd886f" Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.412028 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-797rg" event={"ID":"3404f909-99a2-4dd2-b7c4-0990f400d875","Type":"ContainerDied","Data":"347a6e29f7b8276463e468e5e6e345d4651033e5707144412e37acf59e9891d9"} Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.412054 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347a6e29f7b8276463e468e5e6e345d4651033e5707144412e37acf59e9891d9" Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.413377 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c7-account-create-update-cjx52" event={"ID":"96040470-4dec-4d11-9751-860b901ca710","Type":"ContainerDied","Data":"3f5e858a0173d7e6113325131f56f384b8f3b4fa6a6ec8fd5d81ae972c30db7a"} Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.413422 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f5e858a0173d7e6113325131f56f384b8f3b4fa6a6ec8fd5d81ae972c30db7a" Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.572623 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56c7-account-create-update-cjx52"
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.579690 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-797rg"
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.612827 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5505-account-create-update-mcpqs"
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.625417 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7108-account-create-update-2rrxt"
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.669521 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-operator-scripts\") pod \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.669592 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hck8\" (UniqueName: \"kubernetes.io/projected/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-kube-api-access-9hck8\") pod \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\" (UID: \"2472226d-c8d4-48ef-a2da-f3dc8fd9695b\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.669648 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3404f909-99a2-4dd2-b7c4-0990f400d875-operator-scripts\") pod \"3404f909-99a2-4dd2-b7c4-0990f400d875\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.669773 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qpl\" (UniqueName: \"kubernetes.io/projected/3404f909-99a2-4dd2-b7c4-0990f400d875-kube-api-access-t7qpl\") pod \"3404f909-99a2-4dd2-b7c4-0990f400d875\" (UID: \"3404f909-99a2-4dd2-b7c4-0990f400d875\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.669809 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96040470-4dec-4d11-9751-860b901ca710-operator-scripts\") pod \"96040470-4dec-4d11-9751-860b901ca710\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.669845 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-operator-scripts\") pod \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.670000 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk5cq\" (UniqueName: \"kubernetes.io/projected/96040470-4dec-4d11-9751-860b901ca710-kube-api-access-gk5cq\") pod \"96040470-4dec-4d11-9751-860b901ca710\" (UID: \"96040470-4dec-4d11-9751-860b901ca710\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.670047 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg4mq\" (UniqueName: \"kubernetes.io/projected/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-kube-api-access-tg4mq\") pod \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\" (UID: \"dadcfd7a-71a2-405b-8487-0dedc7cf9b6a\") "
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.670260 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2472226d-c8d4-48ef-a2da-f3dc8fd9695b" (UID: "2472226d-c8d4-48ef-a2da-f3dc8fd9695b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.670775 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.671085 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dadcfd7a-71a2-405b-8487-0dedc7cf9b6a" (UID: "dadcfd7a-71a2-405b-8487-0dedc7cf9b6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.671397 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96040470-4dec-4d11-9751-860b901ca710-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96040470-4dec-4d11-9751-860b901ca710" (UID: "96040470-4dec-4d11-9751-860b901ca710"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.672075 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3404f909-99a2-4dd2-b7c4-0990f400d875-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3404f909-99a2-4dd2-b7c4-0990f400d875" (UID: "3404f909-99a2-4dd2-b7c4-0990f400d875"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.681873 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3404f909-99a2-4dd2-b7c4-0990f400d875-kube-api-access-t7qpl" (OuterVolumeSpecName: "kube-api-access-t7qpl") pod "3404f909-99a2-4dd2-b7c4-0990f400d875" (UID: "3404f909-99a2-4dd2-b7c4-0990f400d875"). InnerVolumeSpecName "kube-api-access-t7qpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.681943 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-kube-api-access-9hck8" (OuterVolumeSpecName: "kube-api-access-9hck8") pod "2472226d-c8d4-48ef-a2da-f3dc8fd9695b" (UID: "2472226d-c8d4-48ef-a2da-f3dc8fd9695b"). InnerVolumeSpecName "kube-api-access-9hck8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.682363 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-kube-api-access-tg4mq" (OuterVolumeSpecName: "kube-api-access-tg4mq") pod "dadcfd7a-71a2-405b-8487-0dedc7cf9b6a" (UID: "dadcfd7a-71a2-405b-8487-0dedc7cf9b6a"). InnerVolumeSpecName "kube-api-access-tg4mq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.683339 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96040470-4dec-4d11-9751-860b901ca710-kube-api-access-gk5cq" (OuterVolumeSpecName: "kube-api-access-gk5cq") pod "96040470-4dec-4d11-9751-860b901ca710" (UID: "96040470-4dec-4d11-9751-860b901ca710"). InnerVolumeSpecName "kube-api-access-gk5cq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.772170 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk5cq\" (UniqueName: \"kubernetes.io/projected/96040470-4dec-4d11-9751-860b901ca710-kube-api-access-gk5cq\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.772237 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg4mq\" (UniqueName: \"kubernetes.io/projected/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-kube-api-access-tg4mq\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.772254 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hck8\" (UniqueName: \"kubernetes.io/projected/2472226d-c8d4-48ef-a2da-f3dc8fd9695b-kube-api-access-9hck8\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.772267 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3404f909-99a2-4dd2-b7c4-0990f400d875-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.772280 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qpl\" (UniqueName: \"kubernetes.io/projected/3404f909-99a2-4dd2-b7c4-0990f400d875-kube-api-access-t7qpl\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.772291 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96040470-4dec-4d11-9751-860b901ca710-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:47 crc kubenswrapper[4746]: I0129 16:55:47.772301 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.423247 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5505-account-create-update-mcpqs"
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.432446 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tql25" event={"ID":"f22df875-c939-436a-9163-28b0d53bf10c","Type":"ContainerStarted","Data":"ee13c42a317c4342e1606cf6ab0b6c008e98b7b4b966ff9748f1e45ad9609fae"}
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.432511 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-797rg"
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.433941 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c7-account-create-update-cjx52"
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.435358 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7108-account-create-update-2rrxt"
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.453676 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-tql25" podStartSLOduration=2.184034224 podStartE2EDuration="7.453660992s" podCreationTimestamp="2026-01-29 16:55:41 +0000 UTC" firstStartedPulling="2026-01-29 16:55:42.164395455 +0000 UTC m=+1264.564980099" lastFinishedPulling="2026-01-29 16:55:47.434022213 +0000 UTC m=+1269.834606867" observedRunningTime="2026-01-29 16:55:48.449526192 +0000 UTC m=+1270.850110846" watchObservedRunningTime="2026-01-29 16:55:48.453660992 +0000 UTC m=+1270.854245636"
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.867561 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns"
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.940302 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-wmcfl"]
Jan 29 16:55:48 crc kubenswrapper[4746]: I0129 16:55:48.940575 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" podUID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerName="dnsmasq-dns" containerID="cri-o://8e139f6609870c3d06003df1422423fde0513fe9d6cdedd81297a493b0a2a9e1" gracePeriod=10
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.064797 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.064851 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.437022 4746 generic.go:334] "Generic (PLEG): container finished" podID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerID="8e139f6609870c3d06003df1422423fde0513fe9d6cdedd81297a493b0a2a9e1" exitCode=0
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.437116 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" event={"ID":"f09729b5-cce8-4671-b19d-e8fb14ad533c","Type":"ContainerDied","Data":"8e139f6609870c3d06003df1422423fde0513fe9d6cdedd81297a493b0a2a9e1"}
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.554394 4746 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl"
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.606678 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-config\") pod \"f09729b5-cce8-4671-b19d-e8fb14ad533c\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") "
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.606805 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgwdb\" (UniqueName: \"kubernetes.io/projected/f09729b5-cce8-4671-b19d-e8fb14ad533c-kube-api-access-dgwdb\") pod \"f09729b5-cce8-4671-b19d-e8fb14ad533c\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") "
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.606863 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-sb\") pod \"f09729b5-cce8-4671-b19d-e8fb14ad533c\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") "
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.606923 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-nb\") pod \"f09729b5-cce8-4671-b19d-e8fb14ad533c\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") "
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.607753 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-dns-svc\") pod \"f09729b5-cce8-4671-b19d-e8fb14ad533c\" (UID: \"f09729b5-cce8-4671-b19d-e8fb14ad533c\") "
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.614550 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09729b5-cce8-4671-b19d-e8fb14ad533c-kube-api-access-dgwdb" (OuterVolumeSpecName: "kube-api-access-dgwdb") pod "f09729b5-cce8-4671-b19d-e8fb14ad533c" (UID: "f09729b5-cce8-4671-b19d-e8fb14ad533c"). InnerVolumeSpecName "kube-api-access-dgwdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.653854 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f09729b5-cce8-4671-b19d-e8fb14ad533c" (UID: "f09729b5-cce8-4671-b19d-e8fb14ad533c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.663254 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f09729b5-cce8-4671-b19d-e8fb14ad533c" (UID: "f09729b5-cce8-4671-b19d-e8fb14ad533c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.663713 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-config" (OuterVolumeSpecName: "config") pod "f09729b5-cce8-4671-b19d-e8fb14ad533c" (UID: "f09729b5-cce8-4671-b19d-e8fb14ad533c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.664575 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f09729b5-cce8-4671-b19d-e8fb14ad533c" (UID: "f09729b5-cce8-4671-b19d-e8fb14ad533c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.710332 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.710376 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.710390 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.710404 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f09729b5-cce8-4671-b19d-e8fb14ad533c-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:49 crc kubenswrapper[4746]: I0129 16:55:49.710417 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgwdb\" (UniqueName: \"kubernetes.io/projected/f09729b5-cce8-4671-b19d-e8fb14ad533c-kube-api-access-dgwdb\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:50 crc kubenswrapper[4746]: I0129 16:55:50.457638 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl"
Jan 29 16:55:50 crc kubenswrapper[4746]: I0129 16:55:50.470577 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-wmcfl" event={"ID":"f09729b5-cce8-4671-b19d-e8fb14ad533c","Type":"ContainerDied","Data":"dcf3c0deda34111f192dfa67ed2a547ecbedb1bfabb2da5155f027277a706b5a"}
Jan 29 16:55:50 crc kubenswrapper[4746]: I0129 16:55:50.470643 4746 scope.go:117] "RemoveContainer" containerID="8e139f6609870c3d06003df1422423fde0513fe9d6cdedd81297a493b0a2a9e1"
Jan 29 16:55:50 crc kubenswrapper[4746]: I0129 16:55:50.498914 4746 scope.go:117] "RemoveContainer" containerID="edea334f335f6124118177c4d5d75bd78483607fab374edae370a9ffd740274e"
Jan 29 16:55:50 crc kubenswrapper[4746]: I0129 16:55:50.511243 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-wmcfl"]
Jan 29 16:55:50 crc kubenswrapper[4746]: I0129 16:55:50.518403 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-wmcfl"]
Jan 29 16:55:51 crc kubenswrapper[4746]: I0129 16:55:51.472048 4746 generic.go:334] "Generic (PLEG): container finished" podID="f22df875-c939-436a-9163-28b0d53bf10c" containerID="ee13c42a317c4342e1606cf6ab0b6c008e98b7b4b966ff9748f1e45ad9609fae" exitCode=0
Jan 29 16:55:51 crc kubenswrapper[4746]: I0129 16:55:51.472592 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tql25" event={"ID":"f22df875-c939-436a-9163-28b0d53bf10c","Type":"ContainerDied","Data":"ee13c42a317c4342e1606cf6ab0b6c008e98b7b4b966ff9748f1e45ad9609fae"}
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.456291 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09729b5-cce8-4671-b19d-e8fb14ad533c" path="/var/lib/kubelet/pods/f09729b5-cce8-4671-b19d-e8fb14ad533c/volumes"
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.778833 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-tql25"
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.862800 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbqdx\" (UniqueName: \"kubernetes.io/projected/f22df875-c939-436a-9163-28b0d53bf10c-kube-api-access-hbqdx\") pod \"f22df875-c939-436a-9163-28b0d53bf10c\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") "
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.862876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-config-data\") pod \"f22df875-c939-436a-9163-28b0d53bf10c\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") "
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.862989 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-combined-ca-bundle\") pod \"f22df875-c939-436a-9163-28b0d53bf10c\" (UID: \"f22df875-c939-436a-9163-28b0d53bf10c\") "
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.868930 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22df875-c939-436a-9163-28b0d53bf10c-kube-api-access-hbqdx" (OuterVolumeSpecName: "kube-api-access-hbqdx") pod "f22df875-c939-436a-9163-28b0d53bf10c" (UID: "f22df875-c939-436a-9163-28b0d53bf10c"). InnerVolumeSpecName "kube-api-access-hbqdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.888885 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f22df875-c939-436a-9163-28b0d53bf10c" (UID: "f22df875-c939-436a-9163-28b0d53bf10c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.906095 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-config-data" (OuterVolumeSpecName: "config-data") pod "f22df875-c939-436a-9163-28b0d53bf10c" (UID: "f22df875-c939-436a-9163-28b0d53bf10c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.969123 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.969393 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbqdx\" (UniqueName: \"kubernetes.io/projected/f22df875-c939-436a-9163-28b0d53bf10c-kube-api-access-hbqdx\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:52 crc kubenswrapper[4746]: I0129 16:55:52.969453 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22df875-c939-436a-9163-28b0d53bf10c-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.507845 4746 generic.go:334] "Generic (PLEG): container finished" podID="cc2d9bf4-a560-4888-bd41-01b29066a20c" containerID="0419721f2f0caa11c5b08fdd1ccc608f02549b1c0858e7d7e528265c7a907743" exitCode=0
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.507953 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qpnkt" event={"ID":"cc2d9bf4-a560-4888-bd41-01b29066a20c","Type":"ContainerDied","Data":"0419721f2f0caa11c5b08fdd1ccc608f02549b1c0858e7d7e528265c7a907743"}
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.509987 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-tql25" event={"ID":"f22df875-c939-436a-9163-28b0d53bf10c","Type":"ContainerDied","Data":"cc8967975758c1b093c621e80c43828ef05597f67f1bdb2ec5890f7cb2250bcc"}
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.510013 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8967975758c1b093c621e80c43828ef05597f67f1bdb2ec5890f7cb2250bcc"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.510054 4746 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-sync-tql25"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.756937 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-6tpvp"]
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757400 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadcfd7a-71a2-405b-8487-0dedc7cf9b6a" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757425 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadcfd7a-71a2-405b-8487-0dedc7cf9b6a" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757446 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2472226d-c8d4-48ef-a2da-f3dc8fd9695b" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757455 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2472226d-c8d4-48ef-a2da-f3dc8fd9695b" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757472 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerName="init"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757480 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerName="init"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757497 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96040470-4dec-4d11-9751-860b901ca710" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757506 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="96040470-4dec-4d11-9751-860b901ca710" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757518 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71f6b91-adc4-409b-80f6-8255c8c98f1a" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757526 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71f6b91-adc4-409b-80f6-8255c8c98f1a" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757538 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22df875-c939-436a-9163-28b0d53bf10c" containerName="keystone-db-sync"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757546 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22df875-c939-436a-9163-28b0d53bf10c" containerName="keystone-db-sync"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757560 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f44c462-c033-46c5-a3f8-090bb92234c8" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757569 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f44c462-c033-46c5-a3f8-090bb92234c8" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757583 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerName="dnsmasq-dns"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757590 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerName="dnsmasq-dns"
Jan 29 16:55:53 crc kubenswrapper[4746]: E0129 16:55:53.757608 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3404f909-99a2-4dd2-b7c4-0990f400d875" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757616 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3404f909-99a2-4dd2-b7c4-0990f400d875" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757829 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f44c462-c033-46c5-a3f8-090bb92234c8" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757845 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3404f909-99a2-4dd2-b7c4-0990f400d875" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757860 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09729b5-cce8-4671-b19d-e8fb14ad533c" containerName="dnsmasq-dns"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757876 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="96040470-4dec-4d11-9751-860b901ca710" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757889 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71f6b91-adc4-409b-80f6-8255c8c98f1a" containerName="mariadb-database-create"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757900 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2472226d-c8d4-48ef-a2da-f3dc8fd9695b" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757908 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="dadcfd7a-71a2-405b-8487-0dedc7cf9b6a" containerName="mariadb-account-create-update"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.757921 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22df875-c939-436a-9163-28b0d53bf10c" containerName="keystone-db-sync"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.759129 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.777194 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-6tpvp"]
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.784294 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nbcbj"]
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.789596 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.791495 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.792230 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.792509 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f9wb5"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.792513 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.792642 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.818080 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nbcbj"]
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884008 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-swift-storage-0\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884055 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8np\" (UniqueName: \"kubernetes.io/projected/aba2ef00-f69e-4357-b228-2bdb918bbaa3-kube-api-access-xb8np\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884076 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlsxq\" (UniqueName: \"kubernetes.io/projected/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-kube-api-access-mlsxq\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884093 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-scripts\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884329 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-config-data\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884412 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-svc\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp"
Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884439 4746 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-fernet-keys\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884537 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-nb\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884582 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-sb\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884620 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-combined-ca-bundle\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884647 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-credential-keys\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.884862 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-config\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.957674 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ls92k"] Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.958638 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.961142 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.961887 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.963142 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ct2ps" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.976100 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ls92k"] Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.985845 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986515 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-scripts\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986556 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q57s2\" (UniqueName: \"kubernetes.io/projected/5a81565e-25dc-4269-8e78-c953acef207b-kube-api-access-q57s2\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986596 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-swift-storage-0\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986623 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb8np\" (UniqueName: \"kubernetes.io/projected/aba2ef00-f69e-4357-b228-2bdb918bbaa3-kube-api-access-xb8np\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986646 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlsxq\" (UniqueName: \"kubernetes.io/projected/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-kube-api-access-mlsxq\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986665 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-scripts\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986693 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-config-data\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " 
pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986713 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-config-data\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986737 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-svc\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986779 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-fernet-keys\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.986916 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-combined-ca-bundle\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-nb\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987244 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-sb\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987301 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a81565e-25dc-4269-8e78-c953acef207b-etc-machine-id\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987361 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-combined-ca-bundle\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987393 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-credential-keys\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 
16:55:53.987495 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-db-sync-config-data\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987559 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-config\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987657 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-swift-storage-0\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.987726 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-svc\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.988004 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-nb\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.988974 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-sb\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:53 crc kubenswrapper[4746]: I0129 16:55:53.989071 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-config\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:53.996416 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-credential-keys\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:53.997242 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:53.997330 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-combined-ca-bundle\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:53.999339 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:53.999835 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.003135 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-config-data\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.009279 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlsxq\" (UniqueName: \"kubernetes.io/projected/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-kube-api-access-mlsxq\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.014624 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-fernet-keys\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.021651 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-scripts\") pod \"keystone-bootstrap-nbcbj\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.027401 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb8np\" (UniqueName: \"kubernetes.io/projected/aba2ef00-f69e-4357-b228-2bdb918bbaa3-kube-api-access-xb8np\") pod \"dnsmasq-dns-58647bbf65-6tpvp\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " pod="openstack/dnsmasq-dns-58647bbf65-6tpvp"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.037138 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.057172 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-w69t8"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.058340 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w69t8"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.063538 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.063752 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.063929 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-h4x88"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.074782 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089581 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-config-data\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089666 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-combined-ca-bundle\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089704 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-combined-ca-bundle\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089748 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a81565e-25dc-4269-8e78-c953acef207b-etc-machine-id\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089769 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-scripts\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089793 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-config-data\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089819 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kglrq\" (UniqueName: \"kubernetes.io/projected/caba25c4-6465-4ebd-9075-bb6c9806e8ea-kube-api-access-kglrq\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089848 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-log-httpd\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089874 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-config\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089902 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-db-sync-config-data\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089942 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-run-httpd\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.089979 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.090010 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfwz\" (UniqueName: \"kubernetes.io/projected/ed0634b6-22e2-4042-a738-45efb60d6c87-kube-api-access-2sfwz\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.090036 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-scripts\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.090059 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57s2\" (UniqueName: \"kubernetes.io/projected/5a81565e-25dc-4269-8e78-c953acef207b-kube-api-access-q57s2\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.090085 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.101403 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a81565e-25dc-4269-8e78-c953acef207b-etc-machine-id\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.108201 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-w69t8"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.108710 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-scripts\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.109104 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nbcbj"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.127635 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-combined-ca-bundle\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.129453 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-config-data\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.135862 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-db-sync-config-data\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.157274 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-d2tsk"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.158610 4746 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-db-sync-d2tsk" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.162418 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57s2\" (UniqueName: \"kubernetes.io/projected/5a81565e-25dc-4269-8e78-c953acef207b-kube-api-access-q57s2\") pod \"cinder-db-sync-ls92k\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") " pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.184821 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-scl5x" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.185722 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kglrq\" (UniqueName: \"kubernetes.io/projected/caba25c4-6465-4ebd-9075-bb6c9806e8ea-kube-api-access-kglrq\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203512 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-log-httpd\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203562 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-config\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203657 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-run-httpd\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203715 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203755 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfwz\" (UniqueName: \"kubernetes.io/projected/ed0634b6-22e2-4042-a738-45efb60d6c87-kube-api-access-2sfwz\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203796 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.203922 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-combined-ca-bundle\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.204003 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-scripts\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.204034 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-config-data\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.208725 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-log-httpd\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.215930 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-run-httpd\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.216523 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-scripts\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.231023 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.235968 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-combined-ca-bundle\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.237831 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-config-data\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.239368 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kglrq\" (UniqueName: \"kubernetes.io/projected/caba25c4-6465-4ebd-9075-bb6c9806e8ea-kube-api-access-kglrq\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.239417 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-config\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.244451 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.249523 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d2tsk"] Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.250697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfwz\" (UniqueName: \"kubernetes.io/projected/ed0634b6-22e2-4042-a738-45efb60d6c87-kube-api-access-2sfwz\") pod \"neutron-db-sync-w69t8\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " pod="openstack/neutron-db-sync-w69t8" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.268898 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w69t8" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.272671 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ls92k" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.300550 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w422g"] Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.302729 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.305561 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zqz9p" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.305805 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.305956 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.309894 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-db-sync-config-data\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.309959 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9zr6\" (UniqueName: \"kubernetes.io/projected/abc85a95-136d-4ffe-97ab-adea84894a76-kube-api-access-z9zr6\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.310030 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-combined-ca-bundle\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.355868 
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.355868 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w422g"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.378873 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-6tpvp"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.394417 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-wsfrq"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.397546 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.413037 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-sb\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.415255 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h889t\" (UniqueName: \"kubernetes.io/projected/380b315f-5021-4a7c-892b-99545fb9c5cd-kube-api-access-h889t\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.415736 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-svc\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.415851 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-config-data\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.419108 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/380b315f-5021-4a7c-892b-99545fb9c5cd-logs\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.417460 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-wsfrq"]
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.423033 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spgk\" (UniqueName: \"kubernetes.io/projected/971c052f-f64b-4615-a2b5-75ed777b146d-kube-api-access-7spgk\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.423243 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-scripts\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.423351 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-db-sync-config-data\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.423474 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-swift-storage-0\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.423564 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-config\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.424163 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-combined-ca-bundle\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.424333 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9zr6\" (UniqueName: \"kubernetes.io/projected/abc85a95-136d-4ffe-97ab-adea84894a76-kube-api-access-z9zr6\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.424933 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-nb\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.425057 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-combined-ca-bundle\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.430134 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-db-sync-config-data\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.432450 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-combined-ca-bundle\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.461843 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9zr6\" (UniqueName: \"kubernetes.io/projected/abc85a95-136d-4ffe-97ab-adea84894a76-kube-api-access-z9zr6\") pod \"barbican-db-sync-d2tsk\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " pod="openstack/barbican-db-sync-d2tsk"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527073 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-scripts\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527132 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-swift-storage-0\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527149 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-config\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527163 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-combined-ca-bundle\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527229 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-nb\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527259 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-sb\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527292 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h889t\" (UniqueName: \"kubernetes.io/projected/380b315f-5021-4a7c-892b-99545fb9c5cd-kube-api-access-h889t\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g"
Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527358 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-svc\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
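Interleaved with the mounts, the kubelet sync loop records each pod change it receives from the API server: "SyncLoop ADD" (kubelet.go:2421), "SyncLoop UPDATE" (kubelet.go:2428), and "SyncLoop DELETE" (kubelet.go:2437), as with the dnsmasq-dns replacement above. A minimal sketch, again stdlib-only Go over this log format, that collects the per-pod event order so churn like ADD → UPDATE → DELETE is easy to spot:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g.: kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d2tsk"]
	re := regexp.MustCompile(`"SyncLoop (ADD|UPDATE|DELETE)" source="api" pods=\["([^"]+)"\]`)
	events := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			events[m[2]] = append(events[m[2]], m[1])
		}
	}
	for pod, evs := range events {
		fmt.Println(pod, evs) // e.g. openstack/dnsmasq-dns-fd458c8cc-wsfrq [ADD UPDATE UPDATE DELETE]
	}
}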
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-config-data\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527397 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/380b315f-5021-4a7c-892b-99545fb9c5cd-logs\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.527430 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7spgk\" (UniqueName: \"kubernetes.io/projected/971c052f-f64b-4615-a2b5-75ed777b146d-kube-api-access-7spgk\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.528459 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-config\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.528712 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-nb\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.528985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/380b315f-5021-4a7c-892b-99545fb9c5cd-logs\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.529074 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-sb\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.529634 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-swift-storage-0\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.529933 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-svc\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.537874 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.540280 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-combined-ca-bundle\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.541101 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-config-data\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.543660 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-scripts\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.547584 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spgk\" (UniqueName: \"kubernetes.io/projected/971c052f-f64b-4615-a2b5-75ed777b146d-kube-api-access-7spgk\") pod \"dnsmasq-dns-fd458c8cc-wsfrq\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.547628 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h889t\" (UniqueName: \"kubernetes.io/projected/380b315f-5021-4a7c-892b-99545fb9c5cd-kube-api-access-h889t\") pod \"placement-db-sync-w422g\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.578714 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-d2tsk" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.635763 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w422g" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.755921 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.809169 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nbcbj"] Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.819768 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-6tpvp"] Jan 29 16:55:54 crc kubenswrapper[4746]: W0129 16:55:54.828441 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaba2ef00_f69e_4357_b228_2bdb918bbaa3.slice/crio-e17f267909f7e5c10aa33ea241acd8b0ae789995e4a7ea1d195c6cd4747a0567 WatchSource:0}: Error finding container e17f267909f7e5c10aa33ea241acd8b0ae789995e4a7ea1d195c6cd4747a0567: Status 404 returned error can't find the container with id e17f267909f7e5c10aa33ea241acd8b0ae789995e4a7ea1d195c6cd4747a0567 Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.922965 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ls92k"] Jan 29 16:55:54 crc kubenswrapper[4746]: W0129 16:55:54.929341 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a81565e_25dc_4269_8e78_c953acef207b.slice/crio-ab1458a5d2a1c7420a9492c3696225a9fddc000e30822caf4eb45f9b8c1bdf6e WatchSource:0}: Error finding container ab1458a5d2a1c7420a9492c3696225a9fddc000e30822caf4eb45f9b8c1bdf6e: Status 404 returned error can't find the container with id ab1458a5d2a1c7420a9492c3696225a9fddc000e30822caf4eb45f9b8c1bdf6e Jan 29 16:55:54 crc kubenswrapper[4746]: I0129 16:55:54.998160 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-w69t8"] Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.143880 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.385236 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.440487 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhbw8\" (UniqueName: \"kubernetes.io/projected/cc2d9bf4-a560-4888-bd41-01b29066a20c-kube-api-access-bhbw8\") pod \"cc2d9bf4-a560-4888-bd41-01b29066a20c\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.440539 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-combined-ca-bundle\") pod \"cc2d9bf4-a560-4888-bd41-01b29066a20c\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.440621 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-config-data\") pod \"cc2d9bf4-a560-4888-bd41-01b29066a20c\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.440729 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-db-sync-config-data\") pod \"cc2d9bf4-a560-4888-bd41-01b29066a20c\" (UID: \"cc2d9bf4-a560-4888-bd41-01b29066a20c\") " Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.446338 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc2d9bf4-a560-4888-bd41-01b29066a20c-kube-api-access-bhbw8" (OuterVolumeSpecName: "kube-api-access-bhbw8") pod "cc2d9bf4-a560-4888-bd41-01b29066a20c" (UID: "cc2d9bf4-a560-4888-bd41-01b29066a20c"). InnerVolumeSpecName "kube-api-access-bhbw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.447951 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc2d9bf4-a560-4888-bd41-01b29066a20c" (UID: "cc2d9bf4-a560-4888-bd41-01b29066a20c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.486071 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc2d9bf4-a560-4888-bd41-01b29066a20c" (UID: "cc2d9bf4-a560-4888-bd41-01b29066a20c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.492476 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-d2tsk"] Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.506838 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w422g"] Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.513683 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-wsfrq"] Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.523261 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-config-data" (OuterVolumeSpecName: "config-data") pod "cc2d9bf4-a560-4888-bd41-01b29066a20c" (UID: "cc2d9bf4-a560-4888-bd41-01b29066a20c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.535605 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qpnkt" event={"ID":"cc2d9bf4-a560-4888-bd41-01b29066a20c","Type":"ContainerDied","Data":"577306c93bda4270714b5d4c0060959819360229049f414d903ed3c9553b47f4"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.535664 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="577306c93bda4270714b5d4c0060959819360229049f414d903ed3c9553b47f4" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.535721 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qpnkt" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.537831 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ls92k" event={"ID":"5a81565e-25dc-4269-8e78-c953acef207b","Type":"ContainerStarted","Data":"ab1458a5d2a1c7420a9492c3696225a9fddc000e30822caf4eb45f9b8c1bdf6e"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.539717 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerStarted","Data":"d7abf523ea31cfc54538d84e3b6a05f72d92a7ec08ec4b4009a4c23ee30fd258"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.542231 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.542262 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhbw8\" (UniqueName: \"kubernetes.io/projected/cc2d9bf4-a560-4888-bd41-01b29066a20c-kube-api-access-bhbw8\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.542276 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.542288 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc2d9bf4-a560-4888-bd41-01b29066a20c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.543254 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nbcbj" 
event={"ID":"7cb85237-9d3b-4f95-b034-f1cd4dffb55c","Type":"ContainerStarted","Data":"6028bd65c0bf0a94f233aec3e6e88f7ae6ed4b7c4612d538e05667c2886c0e7d"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.545322 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2tsk" event={"ID":"abc85a95-136d-4ffe-97ab-adea84894a76","Type":"ContainerStarted","Data":"b2428c3fdfd6303eab1521a4fbf6f1610cbf1f2bf37408c2645b1099d3e66140"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.546454 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" event={"ID":"aba2ef00-f69e-4357-b228-2bdb918bbaa3","Type":"ContainerStarted","Data":"e17f267909f7e5c10aa33ea241acd8b0ae789995e4a7ea1d195c6cd4747a0567"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.548468 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w69t8" event={"ID":"ed0634b6-22e2-4042-a738-45efb60d6c87","Type":"ContainerStarted","Data":"203a8fa5f103eb76275cfefbdc5e77d3d5e12a9f1bc346b7d1743f8b17763838"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.549772 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w422g" event={"ID":"380b315f-5021-4a7c-892b-99545fb9c5cd","Type":"ContainerStarted","Data":"42706d3194d1621a7fdf550bf1c651b0865398b170afdaefc2901b587920904e"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.560400 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" event={"ID":"971c052f-f64b-4615-a2b5-75ed777b146d","Type":"ContainerStarted","Data":"093a5d57c225bb8a99bbbef36d761e1350c710ff0e9a3bf90e2a8813723d21dd"} Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.936431 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-wsfrq"] Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.956425 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.976615 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-m4rn8"] Jan 29 16:55:55 crc kubenswrapper[4746]: E0129 16:55:55.977027 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2d9bf4-a560-4888-bd41-01b29066a20c" containerName="glance-db-sync" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.977040 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2d9bf4-a560-4888-bd41-01b29066a20c" containerName="glance-db-sync" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.977205 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc2d9bf4-a560-4888-bd41-01b29066a20c" containerName="glance-db-sync" Jan 29 16:55:55 crc kubenswrapper[4746]: I0129 16:55:55.978026 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.007518 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-m4rn8"] Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.054805 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-config\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.054866 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.054901 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.054949 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.054977 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.055021 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j56tq\" (UniqueName: \"kubernetes.io/projected/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-kube-api-access-j56tq\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.065608 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.069407 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.081393 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.098020 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zpldn" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.098123 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.176969 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.202453 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.202581 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j56tq\" (UniqueName: \"kubernetes.io/projected/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-kube-api-access-j56tq\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.202755 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-config\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.202782 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.202836 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.202911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.205326 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.213107 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.213141 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.216964 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-config\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.217511 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.238668 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j56tq\" (UniqueName: \"kubernetes.io/projected/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-kube-api-access-j56tq\") pod \"dnsmasq-dns-5dc4fcdbc-m4rn8\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.304285 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.304381 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.304413 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.304431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgkkt\" (UniqueName: \"kubernetes.io/projected/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-kube-api-access-tgkkt\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.304515 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-logs\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.304553 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.304576 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.351802 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.407067 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.407117 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.407168 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.407217 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.407243 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.407262 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgkkt\" (UniqueName: \"kubernetes.io/projected/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-kube-api-access-tgkkt\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " 
pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.407760 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-logs\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.408328 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-logs\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.409998 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.414371 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.415778 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.416011 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.418949 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.439434 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgkkt\" (UniqueName: \"kubernetes.io/projected/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-kube-api-access-tgkkt\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.486271 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.599261 4746 generic.go:334] "Generic (PLEG): container 
finished" podID="971c052f-f64b-4615-a2b5-75ed777b146d" containerID="f72e5836a97820176942ab652564e336dbe357701ce87245fd28eda9f194b0ec" exitCode=0 Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.599317 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" event={"ID":"971c052f-f64b-4615-a2b5-75ed777b146d","Type":"ContainerDied","Data":"f72e5836a97820176942ab652564e336dbe357701ce87245fd28eda9f194b0ec"} Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.629427 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nbcbj" event={"ID":"7cb85237-9d3b-4f95-b034-f1cd4dffb55c","Type":"ContainerStarted","Data":"564993abe23f748bcad00bc227395a3a07b6f9bffbb87815e97f254c228f5be2"} Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.640929 4746 generic.go:334] "Generic (PLEG): container finished" podID="aba2ef00-f69e-4357-b228-2bdb918bbaa3" containerID="fe6c3a21aae92f79e2a879b2b5d28fa94686fc1443b7a47cc9fddc7382f7cdb6" exitCode=0 Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.641072 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" event={"ID":"aba2ef00-f69e-4357-b228-2bdb918bbaa3","Type":"ContainerDied","Data":"fe6c3a21aae92f79e2a879b2b5d28fa94686fc1443b7a47cc9fddc7382f7cdb6"} Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.646045 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w69t8" event={"ID":"ed0634b6-22e2-4042-a738-45efb60d6c87","Type":"ContainerStarted","Data":"0eca1ecffcb158ea772d21b15b7870aa8539dd40c4ec7be1285ce85180bf1e8a"} Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.663960 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nbcbj" podStartSLOduration=3.6639436119999997 podStartE2EDuration="3.663943612s" podCreationTimestamp="2026-01-29 16:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:56.656987065 +0000 UTC m=+1279.057571709" watchObservedRunningTime="2026-01-29 16:55:56.663943612 +0000 UTC m=+1279.064528256" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.733792 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-w69t8" podStartSLOduration=2.733767854 podStartE2EDuration="2.733767854s" podCreationTimestamp="2026-01-29 16:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:56.710078252 +0000 UTC m=+1279.110662896" watchObservedRunningTime="2026-01-29 16:55:56.733767854 +0000 UTC m=+1279.134352498" Jan 29 16:55:56 crc kubenswrapper[4746]: I0129 16:55:56.735455 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:55:56 crc kubenswrapper[4746]: E0129 16:55:56.773814 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod971c052f_f64b_4615_a2b5_75ed777b146d.slice/crio-conmon-f72e5836a97820176942ab652564e336dbe357701ce87245fd28eda9f194b0ec.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.020448 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-m4rn8"] Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.165446 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.167423 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.171373 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.184180 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.222984 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.230848 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324144 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-svc\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324218 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spgk\" (UniqueName: \"kubernetes.io/projected/971c052f-f64b-4615-a2b5-75ed777b146d-kube-api-access-7spgk\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324337 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-sb\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324375 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-config\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") " Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324437 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-nb\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") " Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324463 4746 
Jan 29 16:55:56 crc kubenswrapper[4746]: E0129 16:55:56.773814 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod971c052f_f64b_4615_a2b5_75ed777b146d.slice/crio-conmon-f72e5836a97820176942ab652564e336dbe357701ce87245fd28eda9f194b0ec.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.020448 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-m4rn8"]
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.165446 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.167423 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.171373 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.184180 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.222984 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.230848 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324144 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-svc\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324218 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spgk\" (UniqueName: \"kubernetes.io/projected/971c052f-f64b-4615-a2b5-75ed777b146d-kube-api-access-7spgk\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324337 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-sb\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324375 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-config\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324437 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-nb\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324463 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-swift-storage-0\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324503 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb8np\" (UniqueName: \"kubernetes.io/projected/aba2ef00-f69e-4357-b228-2bdb918bbaa3-kube-api-access-xb8np\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324527 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-sb\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324791 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-swift-storage-0\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324841 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-svc\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324890 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-nb\") pod \"971c052f-f64b-4615-a2b5-75ed777b146d\" (UID: \"971c052f-f64b-4615-a2b5-75ed777b146d\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.324938 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-config\") pod \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\" (UID: \"aba2ef00-f69e-4357-b228-2bdb918bbaa3\") "
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.325992 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.326048 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.326077 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.326155 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg5hw\" (UniqueName: \"kubernetes.io/projected/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-kube-api-access-kg5hw\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.326210 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.326246 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.326404 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.333558 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba2ef00-f69e-4357-b228-2bdb918bbaa3-kube-api-access-xb8np" (OuterVolumeSpecName: "kube-api-access-xb8np") pod "aba2ef00-f69e-4357-b228-2bdb918bbaa3" (UID: "aba2ef00-f69e-4357-b228-2bdb918bbaa3"). InnerVolumeSpecName "kube-api-access-xb8np". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.355842 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-config" (OuterVolumeSpecName: "config") pod "971c052f-f64b-4615-a2b5-75ed777b146d" (UID: "971c052f-f64b-4615-a2b5-75ed777b146d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.378839 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "971c052f-f64b-4615-a2b5-75ed777b146d" (UID: "971c052f-f64b-4615-a2b5-75ed777b146d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.382682 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aba2ef00-f69e-4357-b228-2bdb918bbaa3" (UID: "aba2ef00-f69e-4357-b228-2bdb918bbaa3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.386282 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971c052f-f64b-4615-a2b5-75ed777b146d-kube-api-access-7spgk" (OuterVolumeSpecName: "kube-api-access-7spgk") pod "971c052f-f64b-4615-a2b5-75ed777b146d" (UID: "971c052f-f64b-4615-a2b5-75ed777b146d"). InnerVolumeSpecName "kube-api-access-7spgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.402121 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "971c052f-f64b-4615-a2b5-75ed777b146d" (UID: "971c052f-f64b-4615-a2b5-75ed777b146d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.408411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aba2ef00-f69e-4357-b228-2bdb918bbaa3" (UID: "aba2ef00-f69e-4357-b228-2bdb918bbaa3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.410570 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "971c052f-f64b-4615-a2b5-75ed777b146d" (UID: "971c052f-f64b-4615-a2b5-75ed777b146d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428381 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428435 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428481 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg5hw\" (UniqueName: \"kubernetes.io/projected/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-kube-api-access-kg5hw\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428505 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428534 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428616 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428652 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428702 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428713 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428723 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb8np\" (UniqueName: \"kubernetes.io/projected/aba2ef00-f69e-4357-b228-2bdb918bbaa3-kube-api-access-xb8np\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428732 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428740 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428748 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428756 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.428764 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7spgk\" (UniqueName: \"kubernetes.io/projected/971c052f-f64b-4615-a2b5-75ed777b146d-kube-api-access-7spgk\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.429372 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.429377 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.429946 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.430266 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aba2ef00-f69e-4357-b228-2bdb918bbaa3" (UID: "aba2ef00-f69e-4357-b228-2bdb918bbaa3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.434663 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.443216 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.446352 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg5hw\" (UniqueName: \"kubernetes.io/projected/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-kube-api-access-kg5hw\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.450963 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.458594 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-config" (OuterVolumeSpecName: "config") pod "aba2ef00-f69e-4357-b228-2bdb918bbaa3" (UID: "aba2ef00-f69e-4357-b228-2bdb918bbaa3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.486413 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aba2ef00-f69e-4357-b228-2bdb918bbaa3" (UID: "aba2ef00-f69e-4357-b228-2bdb918bbaa3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.489285 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "971c052f-f64b-4615-a2b5-75ed777b146d" (UID: "971c052f-f64b-4615-a2b5-75ed777b146d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.492717 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.531233 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.531312 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.531356 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba2ef00-f69e-4357-b228-2bdb918bbaa3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.531370 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/971c052f-f64b-4615-a2b5-75ed777b146d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.548377 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.662660 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" event={"ID":"aba2ef00-f69e-4357-b228-2bdb918bbaa3","Type":"ContainerDied","Data":"e17f267909f7e5c10aa33ea241acd8b0ae789995e4a7ea1d195c6cd4747a0567"} Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.662687 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58647bbf65-6tpvp" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.662754 4746 scope.go:117] "RemoveContainer" containerID="fe6c3a21aae92f79e2a879b2b5d28fa94686fc1443b7a47cc9fddc7382f7cdb6" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.667646 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" event={"ID":"c3c163ba-2cf5-4100-a7d5-4fffc157a73a","Type":"ContainerStarted","Data":"6cadfe3522a883b73ace2837fab4fe32f0d7b4744796ad92c78cdbad1e05cbd4"} Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.681664 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" event={"ID":"971c052f-f64b-4615-a2b5-75ed777b146d","Type":"ContainerDied","Data":"093a5d57c225bb8a99bbbef36d761e1350c710ff0e9a3bf90e2a8813723d21dd"} Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.681847 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd458c8cc-wsfrq" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.754919 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.833001 4746 scope.go:117] "RemoveContainer" containerID="f72e5836a97820176942ab652564e336dbe357701ce87245fd28eda9f194b0ec" Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.862492 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-6tpvp"] Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.905930 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58647bbf65-6tpvp"] Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.960394 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-wsfrq"] Jan 29 16:55:57 crc kubenswrapper[4746]: I0129 16:55:57.969933 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fd458c8cc-wsfrq"] Jan 29 16:55:58 crc kubenswrapper[4746]: I0129 16:55:58.302616 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:55:58 crc kubenswrapper[4746]: I0129 16:55:58.469100 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971c052f-f64b-4615-a2b5-75ed777b146d" path="/var/lib/kubelet/pods/971c052f-f64b-4615-a2b5-75ed777b146d/volumes" Jan 29 16:55:58 crc kubenswrapper[4746]: I0129 16:55:58.471099 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba2ef00-f69e-4357-b228-2bdb918bbaa3" path="/var/lib/kubelet/pods/aba2ef00-f69e-4357-b228-2bdb918bbaa3/volumes" Jan 29 16:55:58 crc kubenswrapper[4746]: I0129 16:55:58.712061 4746 generic.go:334] "Generic (PLEG): container finished" podID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerID="10d5fcb690d71667f468afdd2d5dae0076eebf13fdc60a095952f53c082b9447" exitCode=0 Jan 29 16:55:58 crc kubenswrapper[4746]: I0129 16:55:58.712124 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" event={"ID":"c3c163ba-2cf5-4100-a7d5-4fffc157a73a","Type":"ContainerDied","Data":"10d5fcb690d71667f468afdd2d5dae0076eebf13fdc60a095952f53c082b9447"} Jan 29 16:55:58 crc kubenswrapper[4746]: I0129 16:55:58.728882 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d","Type":"ContainerStarted","Data":"da16e268780aae2f2096e8a3b76197f1b9300a05b3f3e3b70571736538ae1031"} Jan 29 16:55:58 crc kubenswrapper[4746]: I0129 16:55:58.731644 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e","Type":"ContainerStarted","Data":"e084816187201dfadc51588aa21de9edc64f543b6ee8be7bb172f8014a25577f"} Jan 29 16:55:59 crc kubenswrapper[4746]: I0129 16:55:59.757918 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e","Type":"ContainerStarted","Data":"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c"} Jan 29 16:55:59 crc kubenswrapper[4746]: I0129 16:55:59.762031 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d","Type":"ContainerStarted","Data":"46bf034a43d4e5c104cd5e22da54aa0dbb12d0f6fd01f16f9c0be8a9b1d5abac"} Jan 29 16:55:59 crc kubenswrapper[4746]: I0129 16:55:59.762072 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d","Type":"ContainerStarted","Data":"b3bdef199855009893c794c955449b488adb6bb6496a4fe4199a17055b444ba3"} Jan 29 16:55:59 crc kubenswrapper[4746]: I0129 16:55:59.770013 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" event={"ID":"c3c163ba-2cf5-4100-a7d5-4fffc157a73a","Type":"ContainerStarted","Data":"4efa12f590ada733c93d1bb73bcf0260dc99cb00f8319a4619f1e025f4c73162"} Jan 29 16:55:59 crc kubenswrapper[4746]: I0129 16:55:59.770299 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:55:59 crc kubenswrapper[4746]: I0129 16:55:59.789555 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.789537887 podStartE2EDuration="3.789537887s" podCreationTimestamp="2026-01-29 16:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:59.786380463 +0000 UTC m=+1282.186965107" watchObservedRunningTime="2026-01-29 16:55:59.789537887 +0000 UTC m=+1282.190122531" Jan 29 16:55:59 crc kubenswrapper[4746]: I0129 16:55:59.807840 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" podStartSLOduration=4.807817304 podStartE2EDuration="4.807817304s" podCreationTimestamp="2026-01-29 16:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:59.800335775 +0000 UTC m=+1282.200920429" watchObservedRunningTime="2026-01-29 16:55:59.807817304 +0000 UTC m=+1282.208401948" Jan 29 16:56:04 crc kubenswrapper[4746]: I0129 16:56:04.174175 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:56:04 crc kubenswrapper[4746]: I0129 16:56:04.176058 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-log" containerID="cri-o://b3bdef199855009893c794c955449b488adb6bb6496a4fe4199a17055b444ba3" 
gracePeriod=30 Jan 29 16:56:04 crc kubenswrapper[4746]: I0129 16:56:04.176203 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-httpd" containerID="cri-o://46bf034a43d4e5c104cd5e22da54aa0dbb12d0f6fd01f16f9c0be8a9b1d5abac" gracePeriod=30 Jan 29 16:56:04 crc kubenswrapper[4746]: I0129 16:56:04.322454 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:56:04 crc kubenswrapper[4746]: I0129 16:56:04.811718 4746 generic.go:334] "Generic (PLEG): container finished" podID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerID="b3bdef199855009893c794c955449b488adb6bb6496a4fe4199a17055b444ba3" exitCode=143 Jan 29 16:56:04 crc kubenswrapper[4746]: I0129 16:56:04.811850 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d","Type":"ContainerDied","Data":"b3bdef199855009893c794c955449b488adb6bb6496a4fe4199a17055b444ba3"} Jan 29 16:56:05 crc kubenswrapper[4746]: I0129 16:56:05.821503 4746 generic.go:334] "Generic (PLEG): container finished" podID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerID="46bf034a43d4e5c104cd5e22da54aa0dbb12d0f6fd01f16f9c0be8a9b1d5abac" exitCode=0 Jan 29 16:56:05 crc kubenswrapper[4746]: I0129 16:56:05.821546 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d","Type":"ContainerDied","Data":"46bf034a43d4e5c104cd5e22da54aa0dbb12d0f6fd01f16f9c0be8a9b1d5abac"} Jan 29 16:56:06 crc kubenswrapper[4746]: I0129 16:56:06.353487 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:56:06 crc kubenswrapper[4746]: I0129 16:56:06.422349 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9lnns"] Jan 29 16:56:06 crc kubenswrapper[4746]: I0129 16:56:06.422570 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="dnsmasq-dns" containerID="cri-o://e1e18334bb7b47583f796919d0747b0d3f1ab0fcf0e5c5e253129d612ce6d1d0" gracePeriod=10 Jan 29 16:56:06 crc kubenswrapper[4746]: I0129 16:56:06.840621 4746 generic.go:334] "Generic (PLEG): container finished" podID="aba66b06-8858-4c3c-abce-f263597324fb" containerID="e1e18334bb7b47583f796919d0747b0d3f1ab0fcf0e5c5e253129d612ce6d1d0" exitCode=0 Jan 29 16:56:06 crc kubenswrapper[4746]: I0129 16:56:06.840701 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" event={"ID":"aba66b06-8858-4c3c-abce-f263597324fb","Type":"ContainerDied","Data":"e1e18334bb7b47583f796919d0747b0d3f1ab0fcf0e5c5e253129d612ce6d1d0"} Jan 29 16:56:06 crc kubenswrapper[4746]: I0129 16:56:06.842595 4746 generic.go:334] "Generic (PLEG): container finished" podID="7cb85237-9d3b-4f95-b034-f1cd4dffb55c" containerID="564993abe23f748bcad00bc227395a3a07b6f9bffbb87815e97f254c228f5be2" exitCode=0 Jan 29 16:56:06 crc kubenswrapper[4746]: I0129 16:56:06.842619 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nbcbj" event={"ID":"7cb85237-9d3b-4f95-b034-f1cd4dffb55c","Type":"ContainerDied","Data":"564993abe23f748bcad00bc227395a3a07b6f9bffbb87815e97f254c228f5be2"} Jan 29 16:56:08 crc 
kubenswrapper[4746]: I0129 16:56:08.865088 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Jan 29 16:56:13 crc kubenswrapper[4746]: I0129 16:56:13.865869 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.013362 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.112306 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-fernet-keys\") pod \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.112351 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-config-data\") pod \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.112375 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlsxq\" (UniqueName: \"kubernetes.io/projected/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-kube-api-access-mlsxq\") pod \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.112518 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-scripts\") pod \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.112599 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-combined-ca-bundle\") pod \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.112666 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-credential-keys\") pod \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\" (UID: \"7cb85237-9d3b-4f95-b034-f1cd4dffb55c\") " Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.142551 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-kube-api-access-mlsxq" (OuterVolumeSpecName: "kube-api-access-mlsxq") pod "7cb85237-9d3b-4f95-b034-f1cd4dffb55c" (UID: "7cb85237-9d3b-4f95-b034-f1cd4dffb55c"). InnerVolumeSpecName "kube-api-access-mlsxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.143685 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7cb85237-9d3b-4f95-b034-f1cd4dffb55c" (UID: "7cb85237-9d3b-4f95-b034-f1cd4dffb55c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.146434 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7cb85237-9d3b-4f95-b034-f1cd4dffb55c" (UID: "7cb85237-9d3b-4f95-b034-f1cd4dffb55c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.159445 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-scripts" (OuterVolumeSpecName: "scripts") pod "7cb85237-9d3b-4f95-b034-f1cd4dffb55c" (UID: "7cb85237-9d3b-4f95-b034-f1cd4dffb55c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.169468 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-config-data" (OuterVolumeSpecName: "config-data") pod "7cb85237-9d3b-4f95-b034-f1cd4dffb55c" (UID: "7cb85237-9d3b-4f95-b034-f1cd4dffb55c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.187928 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cb85237-9d3b-4f95-b034-f1cd4dffb55c" (UID: "7cb85237-9d3b-4f95-b034-f1cd4dffb55c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.220468 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.220510 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.220522 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlsxq\" (UniqueName: \"kubernetes.io/projected/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-kube-api-access-mlsxq\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.220536 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.220546 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.220557 4746 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7cb85237-9d3b-4f95-b034-f1cd4dffb55c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.936053 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nbcbj" event={"ID":"7cb85237-9d3b-4f95-b034-f1cd4dffb55c","Type":"ContainerDied","Data":"6028bd65c0bf0a94f233aec3e6e88f7ae6ed4b7c4612d538e05667c2886c0e7d"} Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.936363 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6028bd65c0bf0a94f233aec3e6e88f7ae6ed4b7c4612d538e05667c2886c0e7d" Jan 29 16:56:17 crc kubenswrapper[4746]: I0129 16:56:17.936440 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nbcbj" Jan 29 16:56:18 crc kubenswrapper[4746]: E0129 16:56:18.109112 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 29 16:56:18 crc kubenswrapper[4746]: E0129 16:56:18.109529 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q57s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-ls92k_openstack(5a81565e-25dc-4269-8e78-c953acef207b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.110848 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nbcbj"] Jan 29 16:56:18 crc kubenswrapper[4746]: E0129 16:56:18.110861 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/cinder-db-sync-ls92k" podUID="5a81565e-25dc-4269-8e78-c953acef207b" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.118762 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nbcbj"] Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.206416 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mmllj"] Jan 29 16:56:18 crc kubenswrapper[4746]: E0129 16:56:18.206817 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971c052f-f64b-4615-a2b5-75ed777b146d" containerName="init" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.206834 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="971c052f-f64b-4615-a2b5-75ed777b146d" containerName="init" Jan 29 16:56:18 crc kubenswrapper[4746]: E0129 16:56:18.206848 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba2ef00-f69e-4357-b228-2bdb918bbaa3" containerName="init" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.206855 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba2ef00-f69e-4357-b228-2bdb918bbaa3" containerName="init" Jan 29 16:56:18 crc kubenswrapper[4746]: E0129 16:56:18.206899 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb85237-9d3b-4f95-b034-f1cd4dffb55c" containerName="keystone-bootstrap" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.206907 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb85237-9d3b-4f95-b034-f1cd4dffb55c" containerName="keystone-bootstrap" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.207119 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba2ef00-f69e-4357-b228-2bdb918bbaa3" containerName="init" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.207142 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="971c052f-f64b-4615-a2b5-75ed777b146d" containerName="init" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.207157 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb85237-9d3b-4f95-b034-f1cd4dffb55c" containerName="keystone-bootstrap" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.207928 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.212723 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.212867 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.212894 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.213011 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.213166 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f9wb5" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.227565 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mmllj"] Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.309278 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.343039 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.351409 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-config-data\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.351495 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-credential-keys\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.351525 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-scripts\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.351594 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-fernet-keys\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.351631 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-combined-ca-bundle\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.351658 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmz9h\" (UniqueName: \"kubernetes.io/projected/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-kube-api-access-nmz9h\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.452565 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-config\") pod \"aba66b06-8858-4c3c-abce-f263597324fb\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.452881 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.452927 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-httpd-run\") pod 
\"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.452949 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-scripts\") pod \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.452972 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-sb\") pod \"aba66b06-8858-4c3c-abce-f263597324fb\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453033 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78g5k\" (UniqueName: \"kubernetes.io/projected/aba66b06-8858-4c3c-abce-f263597324fb-kube-api-access-78g5k\") pod \"aba66b06-8858-4c3c-abce-f263597324fb\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453070 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-logs\") pod \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453094 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-combined-ca-bundle\") pod \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453137 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgkkt\" (UniqueName: \"kubernetes.io/projected/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-kube-api-access-tgkkt\") pod \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453430 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" (UID: "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453491 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-config-data\") pod \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\" (UID: \"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453535 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-svc\") pod \"aba66b06-8858-4c3c-abce-f263597324fb\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453584 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-swift-storage-0\") pod \"aba66b06-8858-4c3c-abce-f263597324fb\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453662 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-nb\") pod \"aba66b06-8858-4c3c-abce-f263597324fb\" (UID: \"aba66b06-8858-4c3c-abce-f263597324fb\") " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453856 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-config-data\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-credential-keys\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453934 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-scripts\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.453990 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-fernet-keys\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.454011 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-combined-ca-bundle\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.454030 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmz9h\" (UniqueName: 
\"kubernetes.io/projected/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-kube-api-access-nmz9h\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.454111 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.458885 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba66b06-8858-4c3c-abce-f263597324fb-kube-api-access-78g5k" (OuterVolumeSpecName: "kube-api-access-78g5k") pod "aba66b06-8858-4c3c-abce-f263597324fb" (UID: "aba66b06-8858-4c3c-abce-f263597324fb"). InnerVolumeSpecName "kube-api-access-78g5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.461450 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-logs" (OuterVolumeSpecName: "logs") pod "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" (UID: "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.463955 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-combined-ca-bundle\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.464051 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-scripts" (OuterVolumeSpecName: "scripts") pod "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" (UID: "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.465012 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb85237-9d3b-4f95-b034-f1cd4dffb55c" path="/var/lib/kubelet/pods/7cb85237-9d3b-4f95-b034-f1cd4dffb55c/volumes" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.466137 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" (UID: "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.467172 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-scripts\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.467395 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-kube-api-access-tgkkt" (OuterVolumeSpecName: "kube-api-access-tgkkt") pod "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" (UID: "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d"). 
InnerVolumeSpecName "kube-api-access-tgkkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.466073 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-config-data\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.467571 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-credential-keys\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.467707 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-fernet-keys\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.471999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmz9h\" (UniqueName: \"kubernetes.io/projected/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-kube-api-access-nmz9h\") pod \"keystone-bootstrap-mmllj\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.517845 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aba66b06-8858-4c3c-abce-f263597324fb" (UID: "aba66b06-8858-4c3c-abce-f263597324fb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.519807 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" (UID: "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.532532 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aba66b06-8858-4c3c-abce-f263597324fb" (UID: "aba66b06-8858-4c3c-abce-f263597324fb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.535070 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-config" (OuterVolumeSpecName: "config") pod "aba66b06-8858-4c3c-abce-f263597324fb" (UID: "aba66b06-8858-4c3c-abce-f263597324fb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.539407 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aba66b06-8858-4c3c-abce-f263597324fb" (UID: "aba66b06-8858-4c3c-abce-f263597324fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.543079 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-config-data" (OuterVolumeSpecName: "config-data") pod "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" (UID: "9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.551149 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aba66b06-8858-4c3c-abce-f263597324fb" (UID: "aba66b06-8858-4c3c-abce-f263597324fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555732 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555763 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555775 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555787 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555796 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555815 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555823 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555831 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aba66b06-8858-4c3c-abce-f263597324fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555840 4746 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-78g5k\" (UniqueName: \"kubernetes.io/projected/aba66b06-8858-4c3c-abce-f263597324fb-kube-api-access-78g5k\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555847 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555855 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.555863 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgkkt\" (UniqueName: \"kubernetes.io/projected/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d-kube-api-access-tgkkt\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.573032 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.657416 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.672766 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.948352 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.948369 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d","Type":"ContainerDied","Data":"da16e268780aae2f2096e8a3b76197f1b9300a05b3f3e3b70571736538ae1031"} Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.948809 4746 scope.go:117] "RemoveContainer" containerID="46bf034a43d4e5c104cd5e22da54aa0dbb12d0f6fd01f16f9c0be8a9b1d5abac" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.952157 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w422g" event={"ID":"380b315f-5021-4a7c-892b-99545fb9c5cd","Type":"ContainerStarted","Data":"2274a1f7eecc79da00178b91883fca776d9b8582250496e9800b7b5bdfcb84ba"} Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.955350 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.955364 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-9lnns" event={"ID":"aba66b06-8858-4c3c-abce-f263597324fb","Type":"ContainerDied","Data":"2e669ca7732f90b0b1bde42cbe929ed369d4f5897ed286275c556d39997f835f"} Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.969684 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2tsk" event={"ID":"abc85a95-136d-4ffe-97ab-adea84894a76","Type":"ContainerStarted","Data":"fb937fb01141dd73f6a7ebd7e0fdab6b206a2dc1e58ed1435f1143de26bf2408"} Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.972143 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerStarted","Data":"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f"} Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.972437 4746 scope.go:117] "RemoveContainer" containerID="b3bdef199855009893c794c955449b488adb6bb6496a4fe4199a17055b444ba3" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.978430 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e","Type":"ContainerStarted","Data":"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f"} Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.978555 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-log" containerID="cri-o://92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c" gracePeriod=30 Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.978608 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-httpd" containerID="cri-o://321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f" gracePeriod=30 Jan 29 16:56:18 crc kubenswrapper[4746]: E0129 16:56:18.986632 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-ls92k" podUID="5a81565e-25dc-4269-8e78-c953acef207b" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.989536 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w422g" podStartSLOduration=2.422942308 podStartE2EDuration="24.989510915s" podCreationTimestamp="2026-01-29 16:55:54 +0000 UTC" firstStartedPulling="2026-01-29 16:55:55.506389754 +0000 UTC m=+1277.906974398" lastFinishedPulling="2026-01-29 16:56:18.072958361 +0000 UTC m=+1300.473543005" observedRunningTime="2026-01-29 16:56:18.982990417 +0000 UTC m=+1301.383575081" watchObservedRunningTime="2026-01-29 16:56:18.989510915 +0000 UTC m=+1301.390095559" Jan 29 16:56:18 crc kubenswrapper[4746]: I0129 16:56:18.998179 4746 scope.go:117] "RemoveContainer" containerID="e1e18334bb7b47583f796919d0747b0d3f1ab0fcf0e5c5e253129d612ce6d1d0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.012943 4746 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.021226 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.047298 4746 scope.go:117] "RemoveContainer" containerID="40066284a95a4d9ea3228128f5d2792aac904ab47eee9df592a7b22d5021b9bb" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.065748 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.065808 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120032 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:56:19 crc kubenswrapper[4746]: E0129 16:56:19.120712 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="init" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120727 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="init" Jan 29 16:56:19 crc kubenswrapper[4746]: E0129 16:56:19.120750 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-httpd" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120757 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-httpd" Jan 29 16:56:19 crc kubenswrapper[4746]: E0129 16:56:19.120771 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-log" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120776 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-log" Jan 29 16:56:19 crc kubenswrapper[4746]: E0129 16:56:19.120788 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="dnsmasq-dns" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120794 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="dnsmasq-dns" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120949 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba66b06-8858-4c3c-abce-f263597324fb" containerName="dnsmasq-dns" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120962 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-httpd" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.120976 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" containerName="glance-log" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.121914 4746 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.126012 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.126513 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.127644 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.1276252 podStartE2EDuration="23.1276252s" podCreationTimestamp="2026-01-29 16:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:19.042520647 +0000 UTC m=+1301.443105301" watchObservedRunningTime="2026-01-29 16:56:19.1276252 +0000 UTC m=+1301.528209844" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.151007 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.157241 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-d2tsk" podStartSLOduration=2.579862185 podStartE2EDuration="25.15721964s" podCreationTimestamp="2026-01-29 16:55:54 +0000 UTC" firstStartedPulling="2026-01-29 16:55:55.49576564 +0000 UTC m=+1277.896350284" lastFinishedPulling="2026-01-29 16:56:18.073123085 +0000 UTC m=+1300.473707739" observedRunningTime="2026-01-29 16:56:19.073700862 +0000 UTC m=+1301.474285506" watchObservedRunningTime="2026-01-29 16:56:19.15721964 +0000 UTC m=+1301.557804284" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.169816 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.169866 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.169913 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.169947 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngjrt\" (UniqueName: \"kubernetes.io/projected/0016fc12-9058-415e-be92-a37e69d56c58-kube-api-access-ngjrt\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.170006 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-logs\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.170033 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.170077 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-config-data\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.170103 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-scripts\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.176917 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9lnns"] Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.186851 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-9lnns"] Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.203872 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mmllj"] Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.271546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.271591 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.271628 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.271653 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngjrt\" (UniqueName: \"kubernetes.io/projected/0016fc12-9058-415e-be92-a37e69d56c58-kube-api-access-ngjrt\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 
crc kubenswrapper[4746]: I0129 16:56:19.271695 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-logs\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.271716 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.271746 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-config-data\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.271764 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-scripts\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.272492 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.272608 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.273889 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-logs\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.278459 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.279400 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-config-data\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.280638 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-scripts\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.290679 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.296777 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngjrt\" (UniqueName: \"kubernetes.io/projected/0016fc12-9058-415e-be92-a37e69d56c58-kube-api-access-ngjrt\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.321829 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.353575 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.863930 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.889636 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-combined-ca-bundle\") pod \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.889711 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-logs\") pod \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.889822 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-config-data\") pod \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.889889 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg5hw\" (UniqueName: \"kubernetes.io/projected/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-kube-api-access-kg5hw\") pod \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.889981 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-httpd-run\") pod \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.890019 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.890080 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-scripts\") pod \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\" (UID: \"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e\") " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.894716 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" (UID: "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.896000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-scripts" (OuterVolumeSpecName: "scripts") pod "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" (UID: "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.896397 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-logs" (OuterVolumeSpecName: "logs") pod "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" (UID: "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.898420 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" (UID: "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.902480 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-kube-api-access-kg5hw" (OuterVolumeSpecName: "kube-api-access-kg5hw") pod "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" (UID: "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e"). InnerVolumeSpecName "kube-api-access-kg5hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.942697 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" (UID: "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.955257 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-config-data" (OuterVolumeSpecName: "config-data") pod "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" (UID: "e4bf9311-12f6-48fe-a438-4cd40f2c4b6e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.956552 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:56:19 crc kubenswrapper[4746]: W0129 16:56:19.959883 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0016fc12_9058_415e_be92_a37e69d56c58.slice/crio-0a3b59f0d52b4e3bd4c899e1d565b6f131e3c90e8c6c77e4f6e030cf57378a20 WatchSource:0}: Error finding container 0a3b59f0d52b4e3bd4c899e1d565b6f131e3c90e8c6c77e4f6e030cf57378a20: Status 404 returned error can't find the container with id 0a3b59f0d52b4e3bd4c899e1d565b6f131e3c90e8c6c77e4f6e030cf57378a20 Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.992678 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.992735 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.992748 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.992762 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.992772 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.992781 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:19 crc kubenswrapper[4746]: I0129 16:56:19.992791 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg5hw\" (UniqueName: \"kubernetes.io/projected/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e-kube-api-access-kg5hw\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.005117 4746 generic.go:334] "Generic (PLEG): container finished" podID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerID="321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f" exitCode=0 Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.005152 4746 generic.go:334] "Generic (PLEG): container finished" podID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerID="92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c" exitCode=143 Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.005256 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e","Type":"ContainerDied","Data":"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f"} Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.005290 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e","Type":"ContainerDied","Data":"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c"} Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.005304 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4bf9311-12f6-48fe-a438-4cd40f2c4b6e","Type":"ContainerDied","Data":"e084816187201dfadc51588aa21de9edc64f543b6ee8be7bb172f8014a25577f"} Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.005331 4746 scope.go:117] "RemoveContainer" containerID="321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.005480 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.013256 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0016fc12-9058-415e-be92-a37e69d56c58","Type":"ContainerStarted","Data":"0a3b59f0d52b4e3bd4c899e1d565b6f131e3c90e8c6c77e4f6e030cf57378a20"} Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.017307 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmllj" event={"ID":"f1fdba39-b67b-4ab6-af7d-c254d8f725e7","Type":"ContainerStarted","Data":"a10fd66061f7bcabb2d53d2602e9c073d65b49b0cbcf8a3376eb2de3bc4e75af"} Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.017358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmllj" event={"ID":"f1fdba39-b67b-4ab6-af7d-c254d8f725e7","Type":"ContainerStarted","Data":"106bfd614c9b42b8278898155a0171a8606ca663873fdcc7ec62eb62fdd24a21"} Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.022832 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.040239 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mmllj" podStartSLOduration=2.040222226 podStartE2EDuration="2.040222226s" podCreationTimestamp="2026-01-29 16:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:20.037581694 +0000 UTC m=+1302.438166338" watchObservedRunningTime="2026-01-29 16:56:20.040222226 +0000 UTC m=+1302.440806870" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.058386 4746 scope.go:117] "RemoveContainer" containerID="92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.061741 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.083667 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.094299 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.097687 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:56:20 crc 
kubenswrapper[4746]: E0129 16:56:20.098153 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-log" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.098171 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-log" Jan 29 16:56:20 crc kubenswrapper[4746]: E0129 16:56:20.098204 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-httpd" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.098213 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-httpd" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.098457 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-httpd" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.098482 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" containerName="glance-log" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.099747 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.108591 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.108848 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.111575 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.159001 4746 scope.go:117] "RemoveContainer" containerID="321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f" Jan 29 16:56:20 crc kubenswrapper[4746]: E0129 16:56:20.159501 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f\": container with ID starting with 321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f not found: ID does not exist" containerID="321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.159601 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f"} err="failed to get container status \"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f\": rpc error: code = NotFound desc = could not find container \"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f\": container with ID starting with 321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f not found: ID does not exist" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.159687 4746 scope.go:117] "RemoveContainer" containerID="92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c" Jan 29 16:56:20 crc kubenswrapper[4746]: E0129 16:56:20.160420 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c\": container with 
ID starting with 92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c not found: ID does not exist" containerID="92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.160485 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c"} err="failed to get container status \"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c\": rpc error: code = NotFound desc = could not find container \"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c\": container with ID starting with 92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c not found: ID does not exist" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.160514 4746 scope.go:117] "RemoveContainer" containerID="321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.160895 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f"} err="failed to get container status \"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f\": rpc error: code = NotFound desc = could not find container \"321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f\": container with ID starting with 321659fb1a313c279a06b2b5add7660d6c5420d41fde2989f77b75eb8aaac45f not found: ID does not exist" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.160918 4746 scope.go:117] "RemoveContainer" containerID="92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.161204 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c"} err="failed to get container status \"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c\": rpc error: code = NotFound desc = could not find container \"92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c\": container with ID starting with 92f574021cec1fc859df64c7d26563e4f292da2ae739bea92ae42f6392e27b1c not found: ID does not exist" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.196484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.196560 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.196586 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 
crc kubenswrapper[4746]: I0129 16:56:20.196616 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l922\" (UniqueName: \"kubernetes.io/projected/338e74fb-ad8e-44b8-a56f-cb984371a8f4-kube-api-access-4l922\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.196646 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-logs\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.196668 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.196699 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.196959 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.298481 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.298832 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l922\" (UniqueName: \"kubernetes.io/projected/338e74fb-ad8e-44b8-a56f-cb984371a8f4-kube-api-access-4l922\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.298862 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-logs\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.298888 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc 
kubenswrapper[4746]: I0129 16:56:20.298925 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.298980 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.299034 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.299094 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.299129 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.299633 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.302966 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-logs\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.304531 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.307723 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.317652 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.318259 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l922\" (UniqueName: \"kubernetes.io/projected/338e74fb-ad8e-44b8-a56f-cb984371a8f4-kube-api-access-4l922\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.318903 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.330225 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.429682 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.472813 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d" path="/var/lib/kubelet/pods/9885c139-ad1a-4ffd-87e4-4f0eb5fdec0d/volumes" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.474070 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba66b06-8858-4c3c-abce-f263597324fb" path="/var/lib/kubelet/pods/aba66b06-8858-4c3c-abce-f263597324fb/volumes" Jan 29 16:56:20 crc kubenswrapper[4746]: I0129 16:56:20.475157 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bf9311-12f6-48fe-a438-4cd40f2c4b6e" path="/var/lib/kubelet/pods/e4bf9311-12f6-48fe-a438-4cd40f2c4b6e/volumes" Jan 29 16:56:21 crc kubenswrapper[4746]: I0129 16:56:21.060169 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:56:21 crc kubenswrapper[4746]: I0129 16:56:21.061731 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0016fc12-9058-415e-be92-a37e69d56c58","Type":"ContainerStarted","Data":"d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc"} Jan 29 16:56:21 crc kubenswrapper[4746]: I0129 16:56:21.067026 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerStarted","Data":"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b"} Jan 29 16:56:22 crc kubenswrapper[4746]: I0129 16:56:22.080693 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0016fc12-9058-415e-be92-a37e69d56c58","Type":"ContainerStarted","Data":"9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806"} Jan 29 16:56:22 crc kubenswrapper[4746]: I0129 16:56:22.092900 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"338e74fb-ad8e-44b8-a56f-cb984371a8f4","Type":"ContainerStarted","Data":"066bbca36d5f6c3b28e4f47c6fe7cc6dbd45acfb34c1754e61e1c8ac6bafad31"} Jan 29 16:56:22 crc kubenswrapper[4746]: I0129 16:56:22.092958 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"338e74fb-ad8e-44b8-a56f-cb984371a8f4","Type":"ContainerStarted","Data":"d9fe517129ac2760892fca8b5e75cd7c053f2bd571c0b99dceb1cdf635e997a8"} Jan 29 16:56:22 crc kubenswrapper[4746]: I0129 16:56:22.105995 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.10597897 podStartE2EDuration="3.10597897s" podCreationTimestamp="2026-01-29 16:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:22.103955914 +0000 UTC m=+1304.504540598" watchObservedRunningTime="2026-01-29 16:56:22.10597897 +0000 UTC m=+1304.506563614" Jan 29 16:56:23 crc kubenswrapper[4746]: I0129 16:56:23.112615 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"338e74fb-ad8e-44b8-a56f-cb984371a8f4","Type":"ContainerStarted","Data":"a7091a22f772906d88642a0d2bc3ba5747d115f3c1694c2d9497b8a37ca3aaa2"} Jan 29 16:56:23 crc kubenswrapper[4746]: I0129 16:56:23.143284 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.143261922 podStartE2EDuration="3.143261922s" podCreationTimestamp="2026-01-29 16:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:23.134574924 +0000 UTC m=+1305.535159588" watchObservedRunningTime="2026-01-29 16:56:23.143261922 +0000 UTC m=+1305.543846576" Jan 29 16:56:24 crc kubenswrapper[4746]: I0129 16:56:24.124806 4746 generic.go:334] "Generic (PLEG): container finished" podID="f1fdba39-b67b-4ab6-af7d-c254d8f725e7" containerID="a10fd66061f7bcabb2d53d2602e9c073d65b49b0cbcf8a3376eb2de3bc4e75af" exitCode=0 Jan 29 16:56:24 crc kubenswrapper[4746]: I0129 16:56:24.124956 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmllj" event={"ID":"f1fdba39-b67b-4ab6-af7d-c254d8f725e7","Type":"ContainerDied","Data":"a10fd66061f7bcabb2d53d2602e9c073d65b49b0cbcf8a3376eb2de3bc4e75af"} Jan 29 16:56:24 crc kubenswrapper[4746]: I0129 16:56:24.128711 4746 generic.go:334] "Generic (PLEG): container finished" podID="380b315f-5021-4a7c-892b-99545fb9c5cd" containerID="2274a1f7eecc79da00178b91883fca776d9b8582250496e9800b7b5bdfcb84ba" exitCode=0 Jan 29 16:56:24 crc kubenswrapper[4746]: I0129 16:56:24.128780 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w422g" event={"ID":"380b315f-5021-4a7c-892b-99545fb9c5cd","Type":"ContainerDied","Data":"2274a1f7eecc79da00178b91883fca776d9b8582250496e9800b7b5bdfcb84ba"} Jan 29 16:56:24 crc kubenswrapper[4746]: I0129 16:56:24.135564 4746 generic.go:334] "Generic (PLEG): container finished" podID="abc85a95-136d-4ffe-97ab-adea84894a76" containerID="fb937fb01141dd73f6a7ebd7e0fdab6b206a2dc1e58ed1435f1143de26bf2408" exitCode=0 Jan 29 16:56:24 crc kubenswrapper[4746]: I0129 16:56:24.135977 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2tsk" 
event={"ID":"abc85a95-136d-4ffe-97ab-adea84894a76","Type":"ContainerDied","Data":"fb937fb01141dd73f6a7ebd7e0fdab6b206a2dc1e58ed1435f1143de26bf2408"} Jan 29 16:56:25 crc kubenswrapper[4746]: E0129 16:56:25.032514 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:56:25 crc kubenswrapper[4746]: E0129 16:56:25.032985 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kglrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(caba25c4-6465-4ebd-9075-bb6c9806e8ea): ErrImagePull: initializing source 
docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:25 crc kubenswrapper[4746]: E0129 16:56:25.034227 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.160411 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerStarted","Data":"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5"} Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.160804 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-central-agent" containerID="cri-o://31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f" gracePeriod=30 Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.160884 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="sg-core" containerID="cri-o://80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5" gracePeriod=30 Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.160912 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-notification-agent" containerID="cri-o://ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b" gracePeriod=30 Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.612536 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.630357 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w422g" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.636486 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-d2tsk" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748372 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-config-data\") pod \"380b315f-5021-4a7c-892b-99545fb9c5cd\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748435 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9zr6\" (UniqueName: \"kubernetes.io/projected/abc85a95-136d-4ffe-97ab-adea84894a76-kube-api-access-z9zr6\") pod \"abc85a95-136d-4ffe-97ab-adea84894a76\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748547 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/380b315f-5021-4a7c-892b-99545fb9c5cd-logs\") pod \"380b315f-5021-4a7c-892b-99545fb9c5cd\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748582 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-combined-ca-bundle\") pod \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748665 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-combined-ca-bundle\") pod \"abc85a95-136d-4ffe-97ab-adea84894a76\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748705 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-fernet-keys\") pod \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748729 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmz9h\" (UniqueName: \"kubernetes.io/projected/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-kube-api-access-nmz9h\") pod \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748775 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-config-data\") pod \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748939 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-scripts\") pod \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.748991 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h889t\" (UniqueName: \"kubernetes.io/projected/380b315f-5021-4a7c-892b-99545fb9c5cd-kube-api-access-h889t\") pod \"380b315f-5021-4a7c-892b-99545fb9c5cd\" (UID: 
\"380b315f-5021-4a7c-892b-99545fb9c5cd\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.749065 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-db-sync-config-data\") pod \"abc85a95-136d-4ffe-97ab-adea84894a76\" (UID: \"abc85a95-136d-4ffe-97ab-adea84894a76\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.749101 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-credential-keys\") pod \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\" (UID: \"f1fdba39-b67b-4ab6-af7d-c254d8f725e7\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.749131 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-scripts\") pod \"380b315f-5021-4a7c-892b-99545fb9c5cd\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.749158 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-combined-ca-bundle\") pod \"380b315f-5021-4a7c-892b-99545fb9c5cd\" (UID: \"380b315f-5021-4a7c-892b-99545fb9c5cd\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.755998 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-kube-api-access-nmz9h" (OuterVolumeSpecName: "kube-api-access-nmz9h") pod "f1fdba39-b67b-4ab6-af7d-c254d8f725e7" (UID: "f1fdba39-b67b-4ab6-af7d-c254d8f725e7"). InnerVolumeSpecName "kube-api-access-nmz9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.756340 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/380b315f-5021-4a7c-892b-99545fb9c5cd-logs" (OuterVolumeSpecName: "logs") pod "380b315f-5021-4a7c-892b-99545fb9c5cd" (UID: "380b315f-5021-4a7c-892b-99545fb9c5cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.761301 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-scripts" (OuterVolumeSpecName: "scripts") pod "f1fdba39-b67b-4ab6-af7d-c254d8f725e7" (UID: "f1fdba39-b67b-4ab6-af7d-c254d8f725e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.762934 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380b315f-5021-4a7c-892b-99545fb9c5cd-kube-api-access-h889t" (OuterVolumeSpecName: "kube-api-access-h889t") pod "380b315f-5021-4a7c-892b-99545fb9c5cd" (UID: "380b315f-5021-4a7c-892b-99545fb9c5cd"). InnerVolumeSpecName "kube-api-access-h889t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.767314 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-scripts" (OuterVolumeSpecName: "scripts") pod "380b315f-5021-4a7c-892b-99545fb9c5cd" (UID: "380b315f-5021-4a7c-892b-99545fb9c5cd"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.767933 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc85a95-136d-4ffe-97ab-adea84894a76-kube-api-access-z9zr6" (OuterVolumeSpecName: "kube-api-access-z9zr6") pod "abc85a95-136d-4ffe-97ab-adea84894a76" (UID: "abc85a95-136d-4ffe-97ab-adea84894a76"). InnerVolumeSpecName "kube-api-access-z9zr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.769180 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f1fdba39-b67b-4ab6-af7d-c254d8f725e7" (UID: "f1fdba39-b67b-4ab6-af7d-c254d8f725e7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.782415 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "abc85a95-136d-4ffe-97ab-adea84894a76" (UID: "abc85a95-136d-4ffe-97ab-adea84894a76"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.787415 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f1fdba39-b67b-4ab6-af7d-c254d8f725e7" (UID: "f1fdba39-b67b-4ab6-af7d-c254d8f725e7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.789766 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1fdba39-b67b-4ab6-af7d-c254d8f725e7" (UID: "f1fdba39-b67b-4ab6-af7d-c254d8f725e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.801988 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abc85a95-136d-4ffe-97ab-adea84894a76" (UID: "abc85a95-136d-4ffe-97ab-adea84894a76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.802928 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-config-data" (OuterVolumeSpecName: "config-data") pod "380b315f-5021-4a7c-892b-99545fb9c5cd" (UID: "380b315f-5021-4a7c-892b-99545fb9c5cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.806464 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-config-data" (OuterVolumeSpecName: "config-data") pod "f1fdba39-b67b-4ab6-af7d-c254d8f725e7" (UID: "f1fdba39-b67b-4ab6-af7d-c254d8f725e7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.807110 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "380b315f-5021-4a7c-892b-99545fb9c5cd" (UID: "380b315f-5021-4a7c-892b-99545fb9c5cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852173 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852365 4746 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852452 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852531 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852607 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380b315f-5021-4a7c-892b-99545fb9c5cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852682 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9zr6\" (UniqueName: \"kubernetes.io/projected/abc85a95-136d-4ffe-97ab-adea84894a76-kube-api-access-z9zr6\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852753 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/380b315f-5021-4a7c-892b-99545fb9c5cd-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852834 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852908 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc85a95-136d-4ffe-97ab-adea84894a76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.852982 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.853153 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmz9h\" (UniqueName: \"kubernetes.io/projected/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-kube-api-access-nmz9h\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.853349 4746 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.853473 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1fdba39-b67b-4ab6-af7d-c254d8f725e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.853610 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h889t\" (UniqueName: \"kubernetes.io/projected/380b315f-5021-4a7c-892b-99545fb9c5cd-kube-api-access-h889t\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.909205 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.954565 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kglrq\" (UniqueName: \"kubernetes.io/projected/caba25c4-6465-4ebd-9075-bb6c9806e8ea-kube-api-access-kglrq\") pod \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.954644 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-scripts\") pod \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.954705 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-run-httpd\") pod \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.954732 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-config-data\") pod \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.954797 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-combined-ca-bundle\") pod \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.954842 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-log-httpd\") pod \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.954863 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-sg-core-conf-yaml\") pod \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\" (UID: \"caba25c4-6465-4ebd-9075-bb6c9806e8ea\") " Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.955387 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "caba25c4-6465-4ebd-9075-bb6c9806e8ea" (UID: "caba25c4-6465-4ebd-9075-bb6c9806e8ea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.955635 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "caba25c4-6465-4ebd-9075-bb6c9806e8ea" (UID: "caba25c4-6465-4ebd-9075-bb6c9806e8ea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.958727 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caba25c4-6465-4ebd-9075-bb6c9806e8ea-kube-api-access-kglrq" (OuterVolumeSpecName: "kube-api-access-kglrq") pod "caba25c4-6465-4ebd-9075-bb6c9806e8ea" (UID: "caba25c4-6465-4ebd-9075-bb6c9806e8ea"). InnerVolumeSpecName "kube-api-access-kglrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.963700 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-scripts" (OuterVolumeSpecName: "scripts") pod "caba25c4-6465-4ebd-9075-bb6c9806e8ea" (UID: "caba25c4-6465-4ebd-9075-bb6c9806e8ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.976561 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "caba25c4-6465-4ebd-9075-bb6c9806e8ea" (UID: "caba25c4-6465-4ebd-9075-bb6c9806e8ea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:25 crc kubenswrapper[4746]: I0129 16:56:25.998828 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-config-data" (OuterVolumeSpecName: "config-data") pod "caba25c4-6465-4ebd-9075-bb6c9806e8ea" (UID: "caba25c4-6465-4ebd-9075-bb6c9806e8ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.005796 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caba25c4-6465-4ebd-9075-bb6c9806e8ea" (UID: "caba25c4-6465-4ebd-9075-bb6c9806e8ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.056648 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.057005 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.057106 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kglrq\" (UniqueName: \"kubernetes.io/projected/caba25c4-6465-4ebd-9075-bb6c9806e8ea-kube-api-access-kglrq\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.057211 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.057299 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/caba25c4-6465-4ebd-9075-bb6c9806e8ea-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.057376 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.057648 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caba25c4-6465-4ebd-9075-bb6c9806e8ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.177630 4746 generic.go:334] "Generic (PLEG): container finished" podID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerID="80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5" exitCode=2 Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.178434 4746 generic.go:334] "Generic (PLEG): container finished" podID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerID="ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b" exitCode=0 Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.178567 4746 generic.go:334] "Generic (PLEG): container finished" podID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerID="31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f" exitCode=0 Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.177817 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerDied","Data":"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5"} Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.178661 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerDied","Data":"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b"} Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.178683 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerDied","Data":"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f"} Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.178694 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"caba25c4-6465-4ebd-9075-bb6c9806e8ea","Type":"ContainerDied","Data":"d7abf523ea31cfc54538d84e3b6a05f72d92a7ec08ec4b4009a4c23ee30fd258"} Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.178711 4746 scope.go:117] "RemoveContainer" containerID="80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.177841 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.183371 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmllj" event={"ID":"f1fdba39-b67b-4ab6-af7d-c254d8f725e7","Type":"ContainerDied","Data":"106bfd614c9b42b8278898155a0171a8606ca663873fdcc7ec62eb62fdd24a21"} Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.183649 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="106bfd614c9b42b8278898155a0171a8606ca663873fdcc7ec62eb62fdd24a21" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.183575 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mmllj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.187526 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w422g" event={"ID":"380b315f-5021-4a7c-892b-99545fb9c5cd","Type":"ContainerDied","Data":"42706d3194d1621a7fdf550bf1c651b0865398b170afdaefc2901b587920904e"} Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.187572 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42706d3194d1621a7fdf550bf1c651b0865398b170afdaefc2901b587920904e" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.187627 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w422g" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.195800 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-d2tsk" event={"ID":"abc85a95-136d-4ffe-97ab-adea84894a76","Type":"ContainerDied","Data":"b2428c3fdfd6303eab1521a4fbf6f1610cbf1f2bf37408c2645b1099d3e66140"} Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.196102 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-d2tsk" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.201173 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2428c3fdfd6303eab1521a4fbf6f1610cbf1f2bf37408c2645b1099d3e66140" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.213335 4746 scope.go:117] "RemoveContainer" containerID="ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.252435 4746 scope.go:117] "RemoveContainer" containerID="31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.290327 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.303082 4746 scope.go:117] "RemoveContainer" containerID="80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5" Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.304429 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5\": container with ID starting with 80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5 not found: ID does not exist" containerID="80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.304474 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5"} err="failed to get container status \"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5\": rpc error: code = NotFound desc = could not find container \"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5\": container with ID starting with 80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5 not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.304492 4746 scope.go:117] "RemoveContainer" containerID="ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b" Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.308615 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b\": container with ID starting with ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b not found: ID does not exist" containerID="ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.308671 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b"} err="failed to get container status \"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b\": rpc error: code = NotFound desc = could not find container \"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b\": container with ID starting with ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.308710 4746 scope.go:117] "RemoveContainer" containerID="31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f" Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.317363 4746 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f\": container with ID starting with 31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f not found: ID does not exist" containerID="31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.317404 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f"} err="failed to get container status \"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f\": rpc error: code = NotFound desc = could not find container \"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f\": container with ID starting with 31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.317428 4746 scope.go:117] "RemoveContainer" containerID="80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.319031 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.319110 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5"} err="failed to get container status \"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5\": rpc error: code = NotFound desc = could not find container \"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5\": container with ID starting with 80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5 not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.319127 4746 scope.go:117] "RemoveContainer" containerID="ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.320948 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b"} err="failed to get container status \"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b\": rpc error: code = NotFound desc = could not find container \"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b\": container with ID starting with ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.320968 4746 scope.go:117] "RemoveContainer" containerID="31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.321153 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f"} err="failed to get container status \"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f\": rpc error: code = NotFound desc = could not find container \"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f\": container with ID starting with 31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.321166 4746 scope.go:117] "RemoveContainer" 
containerID="80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.321349 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5"} err="failed to get container status \"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5\": rpc error: code = NotFound desc = could not find container \"80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5\": container with ID starting with 80dee87d4088e4a1ba4f333bab6b023478f43c8c862db0984655828ba7f40ff5 not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.321364 4746 scope.go:117] "RemoveContainer" containerID="ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.321524 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b"} err="failed to get container status \"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b\": rpc error: code = NotFound desc = could not find container \"ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b\": container with ID starting with ae6e62c2a772bc77b9606335d4203e8eca8eca698140a20393b0217780af9b9b not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.321536 4746 scope.go:117] "RemoveContainer" containerID="31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.327265 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f"} err="failed to get container status \"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f\": rpc error: code = NotFound desc = could not find container \"31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f\": container with ID starting with 31251f80217a68928d5b4f50f7b9fff480cfedd7c69494eaa4e893fdb1c1dd5f not found: ID does not exist" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.335261 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.335912 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="sg-core" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.336005 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="sg-core" Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.336097 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380b315f-5021-4a7c-892b-99545fb9c5cd" containerName="placement-db-sync" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.336169 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="380b315f-5021-4a7c-892b-99545fb9c5cd" containerName="placement-db-sync" Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.336303 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc85a95-136d-4ffe-97ab-adea84894a76" containerName="barbican-db-sync" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.336388 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc85a95-136d-4ffe-97ab-adea84894a76" containerName="barbican-db-sync" Jan 29 
16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.336464 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1fdba39-b67b-4ab6-af7d-c254d8f725e7" containerName="keystone-bootstrap" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.336533 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1fdba39-b67b-4ab6-af7d-c254d8f725e7" containerName="keystone-bootstrap" Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.336730 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-central-agent" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.336812 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-central-agent" Jan 29 16:56:26 crc kubenswrapper[4746]: E0129 16:56:26.336895 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-notification-agent" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.336963 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-notification-agent" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.337263 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1fdba39-b67b-4ab6-af7d-c254d8f725e7" containerName="keystone-bootstrap" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.337371 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc85a95-136d-4ffe-97ab-adea84894a76" containerName="barbican-db-sync" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.337459 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="sg-core" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.337543 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-notification-agent" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.337611 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" containerName="ceilometer-central-agent" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.337690 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="380b315f-5021-4a7c-892b-99545fb9c5cd" containerName="placement-db-sync" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.339635 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.349092 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.349363 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.371299 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7fb46889d-2pzb6"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.372735 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.381149 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.381373 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.381579 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-zqz9p" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.382123 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.384939 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.402543 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f4c9c876f-dbjbj"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.403991 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.424624 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-f9wb5" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.424860 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.424996 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.425101 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.425271 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.430272 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.476523 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caba25c4-6465-4ebd-9075-bb6c9806e8ea" path="/var/lib/kubelet/pods/caba25c4-6465-4ebd-9075-bb6c9806e8ea/volumes" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.484565 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494131 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-config-data\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-config-data\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494233 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-m9zzb\" (UniqueName: \"kubernetes.io/projected/4041b083-1378-4515-8a9d-82219087e52a-kube-api-access-m9zzb\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494327 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-credential-keys\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494365 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-log-httpd\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494403 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-combined-ca-bundle\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb54f40-e963-4c6c-9a9c-03195655c57b-logs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494448 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-combined-ca-bundle\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494488 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-scripts\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494507 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-run-httpd\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494534 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-public-tls-certs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494565 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnpn\" 
(UniqueName: \"kubernetes.io/projected/adb54f40-e963-4c6c-9a9c-03195655c57b-kube-api-access-7dnpn\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494581 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494685 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-scripts\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494702 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-config-data\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494741 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-public-tls-certs\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494763 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbtc\" (UniqueName: \"kubernetes.io/projected/2d2a3529-662b-4eb6-aebd-c15e694cab4e-kube-api-access-6cbtc\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494780 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-internal-tls-certs\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494799 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-scripts\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494824 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-internal-tls-certs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.494839 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-fernet-keys\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.500345 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fb46889d-2pzb6"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.515134 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f4c9c876f-dbjbj"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.587125 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-668b7c6465-kf65r"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.589680 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.594408 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-scl5x" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.594610 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.594732 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596208 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9zzb\" (UniqueName: \"kubernetes.io/projected/4041b083-1378-4515-8a9d-82219087e52a-kube-api-access-m9zzb\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596242 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-config-data\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596268 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data-custom\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596289 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-credential-keys\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596307 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0621197e-b84a-4a66-bab3-ad9f4562c8f2-logs\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596323 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-combined-ca-bundle\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596340 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-log-httpd\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596357 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb54f40-e963-4c6c-9a9c-03195655c57b-logs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596372 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-combined-ca-bundle\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596398 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-scripts\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596419 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-run-httpd\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596439 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-public-tls-certs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596461 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnpn\" (UniqueName: \"kubernetes.io/projected/adb54f40-e963-4c6c-9a9c-03195655c57b-kube-api-access-7dnpn\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596478 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" 
Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596519 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596542 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmfwh\" (UniqueName: \"kubernetes.io/projected/0621197e-b84a-4a66-bab3-ad9f4562c8f2-kube-api-access-dmfwh\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596561 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-combined-ca-bundle\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596579 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-scripts\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596597 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-config-data\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596621 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-public-tls-certs\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596643 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cbtc\" (UniqueName: \"kubernetes.io/projected/2d2a3529-662b-4eb6-aebd-c15e694cab4e-kube-api-access-6cbtc\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596659 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-internal-tls-certs\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596677 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-scripts\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596694 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596714 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-internal-tls-certs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596745 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-fernet-keys\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.596771 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-config-data\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.600612 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-log-httpd\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.608173 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-combined-ca-bundle\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.610355 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-run-httpd\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.610849 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb54f40-e963-4c6c-9a9c-03195655c57b-logs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.617808 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-credential-keys\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.625658 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-internal-tls-certs\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: 
\"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.626907 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-scripts\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.627006 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.627704 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-config-data\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.627768 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-668b7c6465-kf65r"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.627884 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.627924 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-fernet-keys\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.628232 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-scripts\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.628400 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-internal-tls-certs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.628456 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-config-data\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.630928 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-config-data\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.631407 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-scripts\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.631644 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9zzb\" (UniqueName: \"kubernetes.io/projected/4041b083-1378-4515-8a9d-82219087e52a-kube-api-access-m9zzb\") pod \"ceilometer-0\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.633782 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-public-tls-certs\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.634278 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-public-tls-certs\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.638214 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbtc\" (UniqueName: \"kubernetes.io/projected/2d2a3529-662b-4eb6-aebd-c15e694cab4e-kube-api-access-6cbtc\") pod \"keystone-6f4c9c876f-dbjbj\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.638992 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnpn\" (UniqueName: \"kubernetes.io/projected/adb54f40-e963-4c6c-9a9c-03195655c57b-kube-api-access-7dnpn\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.643277 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-combined-ca-bundle\") pod \"placement-7fb46889d-2pzb6\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.680112 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-84c78bb54-wv65z"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.707578 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.712114 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.724959 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.725264 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data-custom\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.725335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0621197e-b84a-4a66-bab3-ad9f4562c8f2-logs\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.725647 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmfwh\" (UniqueName: \"kubernetes.io/projected/0621197e-b84a-4a66-bab3-ad9f4562c8f2-kube-api-access-dmfwh\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.725702 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-combined-ca-bundle\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.736522 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0621197e-b84a-4a66-bab3-ad9f4562c8f2-logs\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.739866 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.761554 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.762609 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84c78bb54-wv65z"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.765931 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmfwh\" (UniqueName: \"kubernetes.io/projected/0621197e-b84a-4a66-bab3-ad9f4562c8f2-kube-api-access-dmfwh\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.777121 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-combined-ca-bundle\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.782552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data-custom\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.785837 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.796630 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-5r4xd"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.798118 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.801938 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-5r4xd"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.814464 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data\") pod \"barbican-worker-668b7c6465-kf65r\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") " pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.832135 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data-custom\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.832217 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-config\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.832247 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-combined-ca-bundle\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.833121 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxh6\" (UniqueName: \"kubernetes.io/projected/cd947b0c-106a-45a0-95ae-5a7971e14e64-kube-api-access-tpxh6\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.833249 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8078ce3-8385-4c32-8284-2b7a416a5413-logs\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.834364 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.834400 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " 
pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.834559 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdkv\" (UniqueName: \"kubernetes.io/projected/a8078ce3-8385-4c32-8284-2b7a416a5413-kube-api-access-ksdkv\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.834613 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-svc\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.834630 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.834662 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.842733 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-fd8d7b7c5-2bjng"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.845571 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.857937 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.907405 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-fd8d7b7c5-2bjng"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.921853 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-64978c9b7d-d9wgb"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.924984 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937239 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8078ce3-8385-4c32-8284-2b7a416a5413-logs\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937296 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937326 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksdkv\" (UniqueName: \"kubernetes.io/projected/a8078ce3-8385-4c32-8284-2b7a416a5413-kube-api-access-ksdkv\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937579 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gsq9\" (UniqueName: \"kubernetes.io/projected/f19d23b1-5d41-40a9-88ee-23a039de0ed7-kube-api-access-9gsq9\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937604 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-svc\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937722 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937772 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937799 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937873 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data-custom\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937907 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data-custom\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.937957 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-combined-ca-bundle\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.938009 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-config\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.938054 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-combined-ca-bundle\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.938087 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d23b1-5d41-40a9-88ee-23a039de0ed7-logs\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.938123 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpxh6\" (UniqueName: \"kubernetes.io/projected/cd947b0c-106a-45a0-95ae-5a7971e14e64-kube-api-access-tpxh6\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.943550 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-sb\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.945147 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-svc\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.945720 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-nb\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.946784 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-config\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.950813 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-combined-ca-bundle\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.951247 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.951750 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.952032 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data-custom\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.959339 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8078ce3-8385-4c32-8284-2b7a416a5413-logs\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.959962 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-swift-storage-0\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.965010 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64978c9b7d-d9wgb"] Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.966290 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksdkv\" 
(UniqueName: \"kubernetes.io/projected/a8078ce3-8385-4c32-8284-2b7a416a5413-kube-api-access-ksdkv\") pod \"barbican-keystone-listener-84c78bb54-wv65z\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") " pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:26 crc kubenswrapper[4746]: I0129 16:56:26.966697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpxh6\" (UniqueName: \"kubernetes.io/projected/cd947b0c-106a-45a0-95ae-5a7971e14e64-kube-api-access-tpxh6\") pod \"dnsmasq-dns-6554f656b5-5r4xd\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.009904 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79b795c958-xfcz9"] Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.023421 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79b795c958-xfcz9"] Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.023556 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.028701 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.038075 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047119 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047211 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c530fa14-8291-45d6-800c-54fd9716fa1d-logs\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047256 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data-custom\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047283 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data-custom\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047313 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-combined-ca-bundle\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: 
\"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047358 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-combined-ca-bundle\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047476 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d23b1-5d41-40a9-88ee-23a039de0ed7-logs\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047527 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047618 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtsv\" (UniqueName: \"kubernetes.io/projected/c530fa14-8291-45d6-800c-54fd9716fa1d-kube-api-access-nvtsv\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.047736 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gsq9\" (UniqueName: \"kubernetes.io/projected/f19d23b1-5d41-40a9-88ee-23a039de0ed7-kube-api-access-9gsq9\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.050335 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d23b1-5d41-40a9-88ee-23a039de0ed7-logs\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.052660 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.064551 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-combined-ca-bundle\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.065230 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data-custom\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.069101 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.093205 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gsq9\" (UniqueName: \"kubernetes.io/projected/f19d23b1-5d41-40a9-88ee-23a039de0ed7-kube-api-access-9gsq9\") pod \"barbican-worker-fd8d7b7c5-2bjng\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.125979 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.149823 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c530fa14-8291-45d6-800c-54fd9716fa1d-logs\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150160 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data-custom\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data-custom\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150227 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-combined-ca-bundle\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150244 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-combined-ca-bundle\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150326 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150370 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150418 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvtsv\" (UniqueName: \"kubernetes.io/projected/c530fa14-8291-45d6-800c-54fd9716fa1d-kube-api-access-nvtsv\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150435 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b2d6a7-d21f-451d-97b5-b38aef1efccf-logs\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.150496 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49gkb\" (UniqueName: \"kubernetes.io/projected/01b2d6a7-d21f-451d-97b5-b38aef1efccf-kube-api-access-49gkb\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.151544 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c530fa14-8291-45d6-800c-54fd9716fa1d-logs\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.160569 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data-custom\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.166455 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-combined-ca-bundle\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: 
\"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.166614 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.181786 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvtsv\" (UniqueName: \"kubernetes.io/projected/c530fa14-8291-45d6-800c-54fd9716fa1d-kube-api-access-nvtsv\") pod \"barbican-keystone-listener-64978c9b7d-d9wgb\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") " pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.253054 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data-custom\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.253121 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-combined-ca-bundle\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.253238 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.253300 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b2d6a7-d21f-451d-97b5-b38aef1efccf-logs\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.253358 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49gkb\" (UniqueName: \"kubernetes.io/projected/01b2d6a7-d21f-451d-97b5-b38aef1efccf-kube-api-access-49gkb\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.259681 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b2d6a7-d21f-451d-97b5-b38aef1efccf-logs\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.259871 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data-custom\") pod \"barbican-api-79b795c958-xfcz9\" (UID: 
\"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.260483 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-combined-ca-bundle\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.265334 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.283595 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49gkb\" (UniqueName: \"kubernetes.io/projected/01b2d6a7-d21f-451d-97b5-b38aef1efccf-kube-api-access-49gkb\") pod \"barbican-api-79b795c958-xfcz9\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") " pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.346216 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:27 crc kubenswrapper[4746]: W0129 16:56:27.350012 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4041b083_1378_4515_8a9d_82219087e52a.slice/crio-b09532bae5b312d7f54fb5230cf262d8aa74c7ac61ec010d3171880206aa91c8 WatchSource:0}: Error finding container b09532bae5b312d7f54fb5230cf262d8aa74c7ac61ec010d3171880206aa91c8: Status 404 returned error can't find the container with id b09532bae5b312d7f54fb5230cf262d8aa74c7ac61ec010d3171880206aa91c8 Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.460661 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.476969 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.554004 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fb46889d-2pzb6"] Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.565691 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-668b7c6465-kf65r"] Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.595841 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f4c9c876f-dbjbj"] Jan 29 16:56:27 crc kubenswrapper[4746]: W0129 16:56:27.596835 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb54f40_e963_4c6c_9a9c_03195655c57b.slice/crio-ea3269840757664c2d1c7ae09a86661364f3ec5b25e30cb5472fb6c00c1b3c80 WatchSource:0}: Error finding container ea3269840757664c2d1c7ae09a86661364f3ec5b25e30cb5472fb6c00c1b3c80: Status 404 returned error can't find the container with id ea3269840757664c2d1c7ae09a86661364f3ec5b25e30cb5472fb6c00c1b3c80 Jan 29 16:56:27 crc kubenswrapper[4746]: W0129 16:56:27.600357 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0621197e_b84a_4a66_bab3_ad9f4562c8f2.slice/crio-51a2cedf44a7a3104b0b1427ecc66a3cad1ba46a79ff3eb92d45fb5b06cce4fb WatchSource:0}: Error finding container 51a2cedf44a7a3104b0b1427ecc66a3cad1ba46a79ff3eb92d45fb5b06cce4fb: Status 404 returned error can't find the container with id 51a2cedf44a7a3104b0b1427ecc66a3cad1ba46a79ff3eb92d45fb5b06cce4fb Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.732396 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-5r4xd"] Jan 29 16:56:27 crc kubenswrapper[4746]: W0129 16:56:27.734454 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd947b0c_106a_45a0_95ae_5a7971e14e64.slice/crio-b9f4b28ffb679b20c4c1d2ac62b494bb0d714fb8d2fbf600bc3a62ac65046a1f WatchSource:0}: Error finding container b9f4b28ffb679b20c4c1d2ac62b494bb0d714fb8d2fbf600bc3a62ac65046a1f: Status 404 returned error can't find the container with id b9f4b28ffb679b20c4c1d2ac62b494bb0d714fb8d2fbf600bc3a62ac65046a1f Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.811292 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84c78bb54-wv65z"] Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.866944 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-fd8d7b7c5-2bjng"] Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.988884 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64978c9b7d-d9wgb"] Jan 29 16:56:27 crc kubenswrapper[4746]: I0129 16:56:27.998094 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79b795c958-xfcz9"] Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.256861 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79b795c958-xfcz9" event={"ID":"01b2d6a7-d21f-451d-97b5-b38aef1efccf","Type":"ContainerStarted","Data":"2f4b26fd9af224fa9786f5b3496dc9b27714b0a294aa5a2b64b6250f8433259a"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.258759 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb46889d-2pzb6" 
event={"ID":"adb54f40-e963-4c6c-9a9c-03195655c57b","Type":"ContainerStarted","Data":"ea3269840757664c2d1c7ae09a86661364f3ec5b25e30cb5472fb6c00c1b3c80"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.259899 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" event={"ID":"a8078ce3-8385-4c32-8284-2b7a416a5413","Type":"ContainerStarted","Data":"0484252b7d0e2efe76dc0bea378ea9b097174d935917e1ee98302cec90a1a83c"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.261144 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" event={"ID":"cd947b0c-106a-45a0-95ae-5a7971e14e64","Type":"ContainerStarted","Data":"b9f4b28ffb679b20c4c1d2ac62b494bb0d714fb8d2fbf600bc3a62ac65046a1f"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.263228 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f4c9c876f-dbjbj" event={"ID":"2d2a3529-662b-4eb6-aebd-c15e694cab4e","Type":"ContainerStarted","Data":"6dca297ae2dd008725aa87fdc754f211aa331cee07a88050dc891fc934a0ee29"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.264658 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668b7c6465-kf65r" event={"ID":"0621197e-b84a-4a66-bab3-ad9f4562c8f2","Type":"ContainerStarted","Data":"51a2cedf44a7a3104b0b1427ecc66a3cad1ba46a79ff3eb92d45fb5b06cce4fb"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.266091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerStarted","Data":"b09532bae5b312d7f54fb5230cf262d8aa74c7ac61ec010d3171880206aa91c8"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.267430 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" event={"ID":"f19d23b1-5d41-40a9-88ee-23a039de0ed7","Type":"ContainerStarted","Data":"0a98efe44a1ee27a87394b20d9217e71dce3cd9b050ec278874d8ac9ca1f3676"} Jan 29 16:56:28 crc kubenswrapper[4746]: I0129 16:56:28.269340 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" event={"ID":"c530fa14-8291-45d6-800c-54fd9716fa1d","Type":"ContainerStarted","Data":"89730ddfe40b875f34a08fd2a9731c17ac94456d01411922fd29e512fde52519"} Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.298663 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79b795c958-xfcz9" event={"ID":"01b2d6a7-d21f-451d-97b5-b38aef1efccf","Type":"ContainerStarted","Data":"38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa"} Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.307623 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerStarted","Data":"ef15be7999ce9cc599e5220e744bdd9feba848688e6c1a7e80e75d14f6423d44"} Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.323470 4746 generic.go:334] "Generic (PLEG): container finished" podID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerID="32689455729e0bb70dca5bed4975f76a72e5bf807ae82ea09c50c2a1d15bcc0b" exitCode=0 Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.323559 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" 
event={"ID":"cd947b0c-106a-45a0-95ae-5a7971e14e64","Type":"ContainerDied","Data":"32689455729e0bb70dca5bed4975f76a72e5bf807ae82ea09c50c2a1d15bcc0b"} Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.339518 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f4c9c876f-dbjbj" event={"ID":"2d2a3529-662b-4eb6-aebd-c15e694cab4e","Type":"ContainerStarted","Data":"7d91e45479b9bc92a37b60229bed29f47cbec6a2f001ef73702b8bf9cbd0a8be"} Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.339995 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.355522 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.355559 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.368447 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb46889d-2pzb6" event={"ID":"adb54f40-e963-4c6c-9a9c-03195655c57b","Type":"ContainerStarted","Data":"cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c"} Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.368486 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb46889d-2pzb6" event={"ID":"adb54f40-e963-4c6c-9a9c-03195655c57b","Type":"ContainerStarted","Data":"af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8"} Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.406693 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f4c9c876f-dbjbj" podStartSLOduration=3.406671776 podStartE2EDuration="3.406671776s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:29.400377223 +0000 UTC m=+1311.800961877" watchObservedRunningTime="2026-01-29 16:56:29.406671776 +0000 UTC m=+1311.807256420" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.429502 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.429649 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.801607 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9b7cbf56d-9h4gg"] Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.804046 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.818036 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b7cbf56d-9h4gg"] Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.914963 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx48\" (UniqueName: \"kubernetes.io/projected/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-kube-api-access-7mx48\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.915182 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-public-tls-certs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.915250 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-internal-tls-certs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.915284 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-scripts\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.915363 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-config-data\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.915413 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-combined-ca-bundle\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:29 crc kubenswrapper[4746]: I0129 16:56:29.915436 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-logs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.016750 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-config-data\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.016809 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-combined-ca-bundle\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.016832 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-logs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.016880 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mx48\" (UniqueName: \"kubernetes.io/projected/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-kube-api-access-7mx48\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.016976 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-public-tls-certs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.017016 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-internal-tls-certs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.017045 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-scripts\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.017553 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-logs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.021543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-internal-tls-certs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.021864 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-public-tls-certs\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.021921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-config-data\") pod \"placement-9b7cbf56d-9h4gg\" (UID: 
\"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.022443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-scripts\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.023065 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-combined-ca-bundle\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.042697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mx48\" (UniqueName: \"kubernetes.io/projected/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-kube-api-access-7mx48\") pod \"placement-9b7cbf56d-9h4gg\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") " pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.122816 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.195827 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-654869dd86-s9th4"] Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.197733 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.205165 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.205631 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.212734 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-654869dd86-s9th4"] Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.323013 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-internal-tls-certs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.323518 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data-custom\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.323562 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-combined-ca-bundle\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.323638 
4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33cf45d3-8c95-4453-9a1e-46ad14bce822-logs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.323665 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9w8g\" (UniqueName: \"kubernetes.io/projected/33cf45d3-8c95-4453-9a1e-46ad14bce822-kube-api-access-x9w8g\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.323707 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.323733 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-public-tls-certs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.379304 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.379365 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.425739 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33cf45d3-8c95-4453-9a1e-46ad14bce822-logs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.425798 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9w8g\" (UniqueName: \"kubernetes.io/projected/33cf45d3-8c95-4453-9a1e-46ad14bce822-kube-api-access-x9w8g\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.425846 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.425874 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-public-tls-certs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.426016 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-internal-tls-certs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.426042 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data-custom\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.426074 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-combined-ca-bundle\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.428463 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33cf45d3-8c95-4453-9a1e-46ad14bce822-logs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.431739 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.431810 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.434740 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-public-tls-certs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.436868 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.439101 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data-custom\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.441026 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-internal-tls-certs\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.449334 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-combined-ca-bundle\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.454240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9w8g\" (UniqueName: \"kubernetes.io/projected/33cf45d3-8c95-4453-9a1e-46ad14bce822-kube-api-access-x9w8g\") pod \"barbican-api-654869dd86-s9th4\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.491602 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.497874 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.546109 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:30 crc kubenswrapper[4746]: I0129 16:56:30.646930 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b7cbf56d-9h4gg"] Jan 29 16:56:31 crc kubenswrapper[4746]: I0129 16:56:31.065102 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-654869dd86-s9th4"] Jan 29 16:56:31 crc kubenswrapper[4746]: W0129 16:56:31.070396 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33cf45d3_8c95_4453_9a1e_46ad14bce822.slice/crio-4aa80619b2a8a1702d85e7b84d3ace10e4a40f4cae664150889194b0b26b5325 WatchSource:0}: Error finding container 4aa80619b2a8a1702d85e7b84d3ace10e4a40f4cae664150889194b0b26b5325: Status 404 returned error can't find the container with id 4aa80619b2a8a1702d85e7b84d3ace10e4a40f4cae664150889194b0b26b5325 Jan 29 16:56:31 crc kubenswrapper[4746]: I0129 16:56:31.390443 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654869dd86-s9th4" event={"ID":"33cf45d3-8c95-4453-9a1e-46ad14bce822","Type":"ContainerStarted","Data":"4aa80619b2a8a1702d85e7b84d3ace10e4a40f4cae664150889194b0b26b5325"} Jan 29 16:56:31 crc kubenswrapper[4746]: I0129 16:56:31.392015 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7cbf56d-9h4gg" event={"ID":"9db12a59-b8e4-43e4-add4-9cb361cfe6c5","Type":"ContainerStarted","Data":"d910aff6b85b1c406735298c4da2d7c5f8e6eb9d7aff652e523488a0dfe5e25b"} Jan 29 16:56:31 crc kubenswrapper[4746]: I0129 16:56:31.392495 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:31 crc kubenswrapper[4746]: I0129 16:56:31.392541 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.412888 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79b795c958-xfcz9" event={"ID":"01b2d6a7-d21f-451d-97b5-b38aef1efccf","Type":"ContainerStarted","Data":"908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f"} Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.413210 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:32 crc 
kubenswrapper[4746]: I0129 16:56:32.413229 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.423832 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" event={"ID":"cd947b0c-106a-45a0-95ae-5a7971e14e64","Type":"ContainerStarted","Data":"336e557a58918bd33ef8f9cef0d616bcefaf87b5e58a4de4db8fae23e13dbe62"} Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.425605 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.436025 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654869dd86-s9th4" event={"ID":"33cf45d3-8c95-4453-9a1e-46ad14bce822","Type":"ContainerStarted","Data":"bda6ef59fe6c6aa36650accec8c47f1fb248a6d1176f6aed98b5503facb4cdb6"} Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.436069 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.436165 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.464827 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79b795c958-xfcz9" podStartSLOduration=6.464801882 podStartE2EDuration="6.464801882s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:32.44870586 +0000 UTC m=+1314.849290514" watchObservedRunningTime="2026-01-29 16:56:32.464801882 +0000 UTC m=+1314.865386526" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.478649 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" podStartSLOduration=6.4786131000000005 podStartE2EDuration="6.4786131s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:32.471181836 +0000 UTC m=+1314.871766480" watchObservedRunningTime="2026-01-29 16:56:32.4786131 +0000 UTC m=+1314.879197744" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.500611 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7fb46889d-2pzb6" podStartSLOduration=6.5005905120000005 podStartE2EDuration="6.500590512s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:32.49433887 +0000 UTC m=+1314.894923514" watchObservedRunningTime="2026-01-29 16:56:32.500590512 +0000 UTC m=+1314.901175156" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.540171 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.540329 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:56:32 crc kubenswrapper[4746]: I0129 16:56:32.618034 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 16:56:33 
crc kubenswrapper[4746]: I0129 16:56:33.449426 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7cbf56d-9h4gg" event={"ID":"9db12a59-b8e4-43e4-add4-9cb361cfe6c5","Type":"ContainerStarted","Data":"fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7"} Jan 29 16:56:33 crc kubenswrapper[4746]: I0129 16:56:33.566795 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:33 crc kubenswrapper[4746]: I0129 16:56:33.566940 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:56:33 crc kubenswrapper[4746]: I0129 16:56:33.572303 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 16:56:34 crc kubenswrapper[4746]: I0129 16:56:34.485177 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7cbf56d-9h4gg" event={"ID":"9db12a59-b8e4-43e4-add4-9cb361cfe6c5","Type":"ContainerStarted","Data":"08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718"} Jan 29 16:56:34 crc kubenswrapper[4746]: I0129 16:56:34.487345 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:34 crc kubenswrapper[4746]: I0129 16:56:34.487841 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:56:34 crc kubenswrapper[4746]: I0129 16:56:34.507370 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9b7cbf56d-9h4gg" podStartSLOduration=5.5073504490000005 podStartE2EDuration="5.507350449s" podCreationTimestamp="2026-01-29 16:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:34.50594831 +0000 UTC m=+1316.906532954" watchObservedRunningTime="2026-01-29 16:56:34.507350449 +0000 UTC m=+1316.907935093" Jan 29 16:56:34 crc kubenswrapper[4746]: I0129 16:56:34.945832 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:34 crc kubenswrapper[4746]: I0129 16:56:34.947891 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.502871 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" event={"ID":"a8078ce3-8385-4c32-8284-2b7a416a5413","Type":"ContainerStarted","Data":"fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.503230 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" event={"ID":"a8078ce3-8385-4c32-8284-2b7a416a5413","Type":"ContainerStarted","Data":"3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.512105 4746 generic.go:334] "Generic (PLEG): container finished" podID="ed0634b6-22e2-4042-a738-45efb60d6c87" containerID="0eca1ecffcb158ea772d21b15b7870aa8539dd40c4ec7be1285ce85180bf1e8a" exitCode=0 Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.512167 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w69t8" 
event={"ID":"ed0634b6-22e2-4042-a738-45efb60d6c87","Type":"ContainerDied","Data":"0eca1ecffcb158ea772d21b15b7870aa8539dd40c4ec7be1285ce85180bf1e8a"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.519131 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" event={"ID":"f19d23b1-5d41-40a9-88ee-23a039de0ed7","Type":"ContainerStarted","Data":"d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.519211 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" event={"ID":"f19d23b1-5d41-40a9-88ee-23a039de0ed7","Type":"ContainerStarted","Data":"d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.528816 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.530433 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" podStartSLOduration=3.184633692 podStartE2EDuration="9.530409282s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="2026-01-29 16:56:27.835172595 +0000 UTC m=+1310.235757239" lastFinishedPulling="2026-01-29 16:56:34.180948185 +0000 UTC m=+1316.581532829" observedRunningTime="2026-01-29 16:56:35.524730976 +0000 UTC m=+1317.925315620" watchObservedRunningTime="2026-01-29 16:56:35.530409282 +0000 UTC m=+1317.930993946" Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.555755 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654869dd86-s9th4" event={"ID":"33cf45d3-8c95-4453-9a1e-46ad14bce822","Type":"ContainerStarted","Data":"c78b0cc4c733ab33d81ae04bcb4447f430f04b3f564487ae982eadf1d345566d"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.557180 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.557231 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.576389 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" podStartSLOduration=3.163324697 podStartE2EDuration="9.57634633s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="2026-01-29 16:56:27.888125576 +0000 UTC m=+1310.288710210" lastFinishedPulling="2026-01-29 16:56:34.301147199 +0000 UTC m=+1316.701731843" observedRunningTime="2026-01-29 16:56:35.538572746 +0000 UTC m=+1317.939157390" watchObservedRunningTime="2026-01-29 16:56:35.57634633 +0000 UTC m=+1317.976930974" Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.588741 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" event={"ID":"c530fa14-8291-45d6-800c-54fd9716fa1d","Type":"ContainerStarted","Data":"c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.599151 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668b7c6465-kf65r" 
event={"ID":"0621197e-b84a-4a66-bab3-ad9f4562c8f2","Type":"ContainerStarted","Data":"7bd30ee3393c37edbc77334f946961a30d18374e888613a82c710292d35611c4"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.599217 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668b7c6465-kf65r" event={"ID":"0621197e-b84a-4a66-bab3-ad9f4562c8f2","Type":"ContainerStarted","Data":"c86b35296ad7e7b5a16551c96842f47ff35605cd7bb73199b1d297ed5bcf9fb8"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.607982 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerStarted","Data":"c38ce77397c35f032a308e6b2261503b31f506468159ae23145c5707963e2e97"} Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.641304 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-668b7c6465-kf65r"] Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.652721 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-654869dd86-s9th4" podStartSLOduration=5.652703693 podStartE2EDuration="5.652703693s" podCreationTimestamp="2026-01-29 16:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:35.618205987 +0000 UTC m=+1318.018790641" watchObservedRunningTime="2026-01-29 16:56:35.652703693 +0000 UTC m=+1318.053288337" Jan 29 16:56:35 crc kubenswrapper[4746]: I0129 16:56:35.665324 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-668b7c6465-kf65r" podStartSLOduration=3.031105714 podStartE2EDuration="9.665305948s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="2026-01-29 16:56:27.604754091 +0000 UTC m=+1310.005338735" lastFinishedPulling="2026-01-29 16:56:34.238954325 +0000 UTC m=+1316.639538969" observedRunningTime="2026-01-29 16:56:35.645324251 +0000 UTC m=+1318.045908895" watchObservedRunningTime="2026-01-29 16:56:35.665305948 +0000 UTC m=+1318.065890592" Jan 29 16:56:35 crc kubenswrapper[4746]: E0129 16:56:35.933172 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:56:35 crc kubenswrapper[4746]: E0129 16:56:35.933432 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9zzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4041b083-1378-4515-8a9d-82219087e52a): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:35 crc kubenswrapper[4746]: E0129 16:56:35.934702 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="4041b083-1378-4515-8a9d-82219087e52a" Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.619349 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" event={"ID":"c530fa14-8291-45d6-800c-54fd9716fa1d","Type":"ContainerStarted","Data":"2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b"} Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.625319 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerStarted","Data":"973bff53ed5ce1a3a00aa156240964423f060e8120ee639385744ac62f7eedae"} Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.625507 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-central-agent" containerID="cri-o://ef15be7999ce9cc599e5220e744bdd9feba848688e6c1a7e80e75d14f6423d44" gracePeriod=30 Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.625623 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="sg-core" containerID="cri-o://973bff53ed5ce1a3a00aa156240964423f060e8120ee639385744ac62f7eedae" gracePeriod=30 Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.625688 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-notification-agent" containerID="cri-o://c38ce77397c35f032a308e6b2261503b31f506468159ae23145c5707963e2e97" gracePeriod=30 Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.631847 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ls92k" event={"ID":"5a81565e-25dc-4269-8e78-c953acef207b","Type":"ContainerStarted","Data":"1fcd0fc16e0dc4d896486171c419af575bafdec450638ee76d77646a35a6e962"} Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.645026 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" podStartSLOduration=4.023459737 podStartE2EDuration="10.645004703s" podCreationTimestamp="2026-01-29 16:56:26 +0000 UTC" firstStartedPulling="2026-01-29 16:56:28.001926065 +0000 UTC m=+1310.402510709" lastFinishedPulling="2026-01-29 16:56:34.623471031 +0000 UTC m=+1317.024055675" observedRunningTime="2026-01-29 16:56:36.644051157 +0000 UTC m=+1319.044635831" watchObservedRunningTime="2026-01-29 16:56:36.645004703 +0000 UTC m=+1319.045589347" Jan 29 16:56:36 crc kubenswrapper[4746]: I0129 16:56:36.696308 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84c78bb54-wv65z"] Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.054332 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.098437 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-ls92k" podStartSLOduration=4.723767799 podStartE2EDuration="44.098420807s" podCreationTimestamp="2026-01-29 16:55:53 +0000 UTC" firstStartedPulling="2026-01-29 16:55:54.932266669 +0000 UTC m=+1277.332851313" lastFinishedPulling="2026-01-29 16:56:34.306919677 +0000 UTC m=+1316.707504321" observedRunningTime="2026-01-29 16:56:36.757880616 +0000 UTC m=+1319.158465260" watchObservedRunningTime="2026-01-29 16:56:37.098420807 +0000 UTC m=+1319.499005451" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 
16:56:37.144534 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-m4rn8"] Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.144796 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" podUID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerName="dnsmasq-dns" containerID="cri-o://4efa12f590ada733c93d1bb73bcf0260dc99cb00f8319a4619f1e025f4c73162" gracePeriod=10 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.439147 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w69t8" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.524526 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-combined-ca-bundle\") pod \"ed0634b6-22e2-4042-a738-45efb60d6c87\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.524968 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-config\") pod \"ed0634b6-22e2-4042-a738-45efb60d6c87\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.525114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sfwz\" (UniqueName: \"kubernetes.io/projected/ed0634b6-22e2-4042-a738-45efb60d6c87-kube-api-access-2sfwz\") pod \"ed0634b6-22e2-4042-a738-45efb60d6c87\" (UID: \"ed0634b6-22e2-4042-a738-45efb60d6c87\") " Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.536602 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0634b6-22e2-4042-a738-45efb60d6c87-kube-api-access-2sfwz" (OuterVolumeSpecName: "kube-api-access-2sfwz") pod "ed0634b6-22e2-4042-a738-45efb60d6c87" (UID: "ed0634b6-22e2-4042-a738-45efb60d6c87"). InnerVolumeSpecName "kube-api-access-2sfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.584889 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed0634b6-22e2-4042-a738-45efb60d6c87" (UID: "ed0634b6-22e2-4042-a738-45efb60d6c87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.630244 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.630268 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sfwz\" (UniqueName: \"kubernetes.io/projected/ed0634b6-22e2-4042-a738-45efb60d6c87-kube-api-access-2sfwz\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.639501 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-config" (OuterVolumeSpecName: "config") pod "ed0634b6-22e2-4042-a738-45efb60d6c87" (UID: "ed0634b6-22e2-4042-a738-45efb60d6c87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.713698 4746 generic.go:334] "Generic (PLEG): container finished" podID="4041b083-1378-4515-8a9d-82219087e52a" containerID="973bff53ed5ce1a3a00aa156240964423f060e8120ee639385744ac62f7eedae" exitCode=2 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.713730 4746 generic.go:334] "Generic (PLEG): container finished" podID="4041b083-1378-4515-8a9d-82219087e52a" containerID="c38ce77397c35f032a308e6b2261503b31f506468159ae23145c5707963e2e97" exitCode=0 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.713739 4746 generic.go:334] "Generic (PLEG): container finished" podID="4041b083-1378-4515-8a9d-82219087e52a" containerID="ef15be7999ce9cc599e5220e744bdd9feba848688e6c1a7e80e75d14f6423d44" exitCode=0 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.713800 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerDied","Data":"973bff53ed5ce1a3a00aa156240964423f060e8120ee639385744ac62f7eedae"} Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.713824 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerDied","Data":"c38ce77397c35f032a308e6b2261503b31f506468159ae23145c5707963e2e97"} Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.713836 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerDied","Data":"ef15be7999ce9cc599e5220e744bdd9feba848688e6c1a7e80e75d14f6423d44"} Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.739534 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w69t8" event={"ID":"ed0634b6-22e2-4042-a738-45efb60d6c87","Type":"ContainerDied","Data":"203a8fa5f103eb76275cfefbdc5e77d3d5e12a9f1bc346b7d1743f8b17763838"} Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.739574 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203a8fa5f103eb76275cfefbdc5e77d3d5e12a9f1bc346b7d1743f8b17763838" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.739646 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-w69t8" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.751526 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed0634b6-22e2-4042-a738-45efb60d6c87-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.765968 4746 generic.go:334] "Generic (PLEG): container finished" podID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerID="4efa12f590ada733c93d1bb73bcf0260dc99cb00f8319a4619f1e025f4c73162" exitCode=0 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.766010 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" event={"ID":"c3c163ba-2cf5-4100-a7d5-4fffc157a73a","Type":"ContainerDied","Data":"4efa12f590ada733c93d1bb73bcf0260dc99cb00f8319a4619f1e025f4c73162"} Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.766169 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener-log" containerID="cri-o://3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e" gracePeriod=30 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.768475 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener" containerID="cri-o://fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff" gracePeriod=30 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.773919 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-668b7c6465-kf65r" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker-log" containerID="cri-o://c86b35296ad7e7b5a16551c96842f47ff35605cd7bb73199b1d297ed5bcf9fb8" gracePeriod=30 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.774428 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-668b7c6465-kf65r" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker" containerID="cri-o://7bd30ee3393c37edbc77334f946961a30d18374e888613a82c710292d35611c4" gracePeriod=30 Jan 29 16:56:37 crc kubenswrapper[4746]: I0129 16:56:37.786957 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.851836 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-9qwdx"] Jan 29 16:56:38 crc kubenswrapper[4746]: E0129 16:56:37.852277 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0634b6-22e2-4042-a738-45efb60d6c87" containerName="neutron-db-sync" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.852289 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0634b6-22e2-4042-a738-45efb60d6c87" containerName="neutron-db-sync" Jan 29 16:56:38 crc kubenswrapper[4746]: E0129 16:56:37.852298 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerName="init" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.852306 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerName="init" Jan 29 16:56:38 crc kubenswrapper[4746]: E0129 16:56:37.852319 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerName="dnsmasq-dns" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.852325 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerName="dnsmasq-dns" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.852544 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" containerName="dnsmasq-dns" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.852563 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0634b6-22e2-4042-a738-45efb60d6c87" containerName="neutron-db-sync" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.852828 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-sb\") pod \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.852887 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-swift-storage-0\") pod \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.853077 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-nb\") pod \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.853111 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j56tq\" (UniqueName: \"kubernetes.io/projected/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-kube-api-access-j56tq\") pod \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.853141 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-config\") pod \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\" (UID: 
\"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.853227 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-svc\") pod \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\" (UID: \"c3c163ba-2cf5-4100-a7d5-4fffc157a73a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.853669 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.878221 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-kube-api-access-j56tq" (OuterVolumeSpecName: "kube-api-access-j56tq") pod "c3c163ba-2cf5-4100-a7d5-4fffc157a73a" (UID: "c3c163ba-2cf5-4100-a7d5-4fffc157a73a"). InnerVolumeSpecName "kube-api-access-j56tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.945563 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-9qwdx"] Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.956060 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.956107 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx5r9\" (UniqueName: \"kubernetes.io/projected/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-kube-api-access-mx5r9\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.956164 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.956239 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.956286 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-config\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.956301 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: 
\"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.956559 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j56tq\" (UniqueName: \"kubernetes.io/projected/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-kube-api-access-j56tq\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:37.975106 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-config" (OuterVolumeSpecName: "config") pod "c3c163ba-2cf5-4100-a7d5-4fffc157a73a" (UID: "c3c163ba-2cf5-4100-a7d5-4fffc157a73a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.004856 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3c163ba-2cf5-4100-a7d5-4fffc157a73a" (UID: "c3c163ba-2cf5-4100-a7d5-4fffc157a73a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.007410 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3c163ba-2cf5-4100-a7d5-4fffc157a73a" (UID: "c3c163ba-2cf5-4100-a7d5-4fffc157a73a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.024801 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3c163ba-2cf5-4100-a7d5-4fffc157a73a" (UID: "c3c163ba-2cf5-4100-a7d5-4fffc157a73a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.052540 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3c163ba-2cf5-4100-a7d5-4fffc157a73a" (UID: "c3c163ba-2cf5-4100-a7d5-4fffc157a73a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.072511 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58cbb9c4c4-qq49j"] Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073361 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073419 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-config\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073437 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073518 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073539 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx5r9\" (UniqueName: \"kubernetes.io/projected/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-kube-api-access-mx5r9\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073575 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073627 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073639 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073646 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073654 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-ovsdbserver-sb\") on 
node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.073662 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3c163ba-2cf5-4100-a7d5-4fffc157a73a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.074001 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.074604 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.074844 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.075756 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.076101 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-config\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.076485 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.078517 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.078773 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-h4x88" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.078956 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.079067 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.088037 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58cbb9c4c4-qq49j"] Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.105932 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx5r9\" (UniqueName: \"kubernetes.io/projected/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-kube-api-access-mx5r9\") pod \"dnsmasq-dns-7bdf86f46f-9qwdx\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " 
pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.175829 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-combined-ca-bundle\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.175912 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-ovndb-tls-certs\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.175933 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-httpd-config\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.176037 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtl5c\" (UniqueName: \"kubernetes.io/projected/7436ae82-3679-4ddd-bf25-ab3a104a8395-kube-api-access-jtl5c\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.176076 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-config\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.277774 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-ovndb-tls-certs\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.277827 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-httpd-config\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.277919 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtl5c\" (UniqueName: \"kubernetes.io/projected/7436ae82-3679-4ddd-bf25-ab3a104a8395-kube-api-access-jtl5c\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.277970 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-config\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 
16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.278261 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-combined-ca-bundle\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.286536 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-httpd-config\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.287153 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-ovndb-tls-certs\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.295266 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-config\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.307012 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtl5c\" (UniqueName: \"kubernetes.io/projected/7436ae82-3679-4ddd-bf25-ab3a104a8395-kube-api-access-jtl5c\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.310053 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-combined-ca-bundle\") pod \"neutron-58cbb9c4c4-qq49j\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.416646 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.437388 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:38 crc kubenswrapper[4746]: E0129 16:56:38.445557 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded0634b6_22e2_4042_a738_45efb60d6c87.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded0634b6_22e2_4042_a738_45efb60d6c87.slice/crio-203a8fa5f103eb76275cfefbdc5e77d3d5e12a9f1bc346b7d1743f8b17763838\": RecentStats: unable to find data in memory cache]" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.586433 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-sg-core-conf-yaml\") pod \"4041b083-1378-4515-8a9d-82219087e52a\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.586799 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9zzb\" (UniqueName: \"kubernetes.io/projected/4041b083-1378-4515-8a9d-82219087e52a-kube-api-access-m9zzb\") pod \"4041b083-1378-4515-8a9d-82219087e52a\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.586843 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-scripts\") pod \"4041b083-1378-4515-8a9d-82219087e52a\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.586876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-combined-ca-bundle\") pod \"4041b083-1378-4515-8a9d-82219087e52a\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.586912 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-run-httpd\") pod \"4041b083-1378-4515-8a9d-82219087e52a\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.587058 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-config-data\") pod \"4041b083-1378-4515-8a9d-82219087e52a\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.587107 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-log-httpd\") pod \"4041b083-1378-4515-8a9d-82219087e52a\" (UID: \"4041b083-1378-4515-8a9d-82219087e52a\") " Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.589626 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4041b083-1378-4515-8a9d-82219087e52a" (UID: "4041b083-1378-4515-8a9d-82219087e52a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.590929 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4041b083-1378-4515-8a9d-82219087e52a" (UID: "4041b083-1378-4515-8a9d-82219087e52a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.600662 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-scripts" (OuterVolumeSpecName: "scripts") pod "4041b083-1378-4515-8a9d-82219087e52a" (UID: "4041b083-1378-4515-8a9d-82219087e52a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.603421 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4041b083-1378-4515-8a9d-82219087e52a-kube-api-access-m9zzb" (OuterVolumeSpecName: "kube-api-access-m9zzb") pod "4041b083-1378-4515-8a9d-82219087e52a" (UID: "4041b083-1378-4515-8a9d-82219087e52a"). InnerVolumeSpecName "kube-api-access-m9zzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.658450 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4041b083-1378-4515-8a9d-82219087e52a" (UID: "4041b083-1378-4515-8a9d-82219087e52a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.675537 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-config-data" (OuterVolumeSpecName: "config-data") pod "4041b083-1378-4515-8a9d-82219087e52a" (UID: "4041b083-1378-4515-8a9d-82219087e52a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.677590 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-h4x88" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.694214 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.694253 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.694265 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.694277 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9zzb\" (UniqueName: \"kubernetes.io/projected/4041b083-1378-4515-8a9d-82219087e52a-kube-api-access-m9zzb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.694290 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.694300 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4041b083-1378-4515-8a9d-82219087e52a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.694436 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.727489 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4041b083-1378-4515-8a9d-82219087e52a" (UID: "4041b083-1378-4515-8a9d-82219087e52a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.805734 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4041b083-1378-4515-8a9d-82219087e52a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.834132 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" event={"ID":"c3c163ba-2cf5-4100-a7d5-4fffc157a73a","Type":"ContainerDied","Data":"6cadfe3522a883b73ace2837fab4fe32f0d7b4744796ad92c78cdbad1e05cbd4"} Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.834205 4746 scope.go:117] "RemoveContainer" containerID="4efa12f590ada733c93d1bb73bcf0260dc99cb00f8319a4619f1e025f4c73162" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.834381 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-m4rn8" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.849877 4746 generic.go:334] "Generic (PLEG): container finished" podID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerID="7bd30ee3393c37edbc77334f946961a30d18374e888613a82c710292d35611c4" exitCode=0 Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.849906 4746 generic.go:334] "Generic (PLEG): container finished" podID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerID="c86b35296ad7e7b5a16551c96842f47ff35605cd7bb73199b1d297ed5bcf9fb8" exitCode=143 Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.849949 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668b7c6465-kf65r" event={"ID":"0621197e-b84a-4a66-bab3-ad9f4562c8f2","Type":"ContainerDied","Data":"7bd30ee3393c37edbc77334f946961a30d18374e888613a82c710292d35611c4"} Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.849975 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668b7c6465-kf65r" event={"ID":"0621197e-b84a-4a66-bab3-ad9f4562c8f2","Type":"ContainerDied","Data":"c86b35296ad7e7b5a16551c96842f47ff35605cd7bb73199b1d297ed5bcf9fb8"} Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.858325 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4041b083-1378-4515-8a9d-82219087e52a","Type":"ContainerDied","Data":"b09532bae5b312d7f54fb5230cf262d8aa74c7ac61ec010d3171880206aa91c8"} Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.858458 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.880892 4746 scope.go:117] "RemoveContainer" containerID="10d5fcb690d71667f468afdd2d5dae0076eebf13fdc60a095952f53c082b9447" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.883344 4746 generic.go:334] "Generic (PLEG): container finished" podID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerID="3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e" exitCode=143 Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.883389 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" event={"ID":"a8078ce3-8385-4c32-8284-2b7a416a5413","Type":"ContainerDied","Data":"3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e"} Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.890383 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-m4rn8"] Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.896892 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-m4rn8"] Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.956354 4746 scope.go:117] "RemoveContainer" containerID="973bff53ed5ce1a3a00aa156240964423f060e8120ee639385744ac62f7eedae" Jan 29 16:56:38 crc kubenswrapper[4746]: I0129 16:56:38.984035 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.030878 4746 scope.go:117] "RemoveContainer" containerID="c38ce77397c35f032a308e6b2261503b31f506468159ae23145c5707963e2e97" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.031019 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.048557 4746 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 29 16:56:39 crc kubenswrapper[4746]: E0129 16:56:39.049001 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-central-agent" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.049014 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-central-agent" Jan 29 16:56:39 crc kubenswrapper[4746]: E0129 16:56:39.049026 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="sg-core" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.049032 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="sg-core" Jan 29 16:56:39 crc kubenswrapper[4746]: E0129 16:56:39.049069 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-notification-agent" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.049075 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-notification-agent" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.049269 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="sg-core" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.049284 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-notification-agent" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.049348 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4041b083-1378-4515-8a9d-82219087e52a" containerName="ceilometer-central-agent" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.055619 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.065678 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.067134 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.084999 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.093596 4746 scope.go:117] "RemoveContainer" containerID="ef15be7999ce9cc599e5220e744bdd9feba848688e6c1a7e80e75d14f6423d44" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.176863 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-9qwdx"] Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.218337 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-run-httpd\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.218460 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.218645 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.221237 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-log-httpd\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.222875 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pspgj\" (UniqueName: \"kubernetes.io/projected/e80703aa-8645-4cdb-8c1b-5511ef93bc83-kube-api-access-pspgj\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.223141 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-config-data\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.223475 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-scripts\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.325462 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-scripts\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.325568 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-run-httpd\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.325635 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.325721 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.325794 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-log-httpd\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.325878 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pspgj\" (UniqueName: \"kubernetes.io/projected/e80703aa-8645-4cdb-8c1b-5511ef93bc83-kube-api-access-pspgj\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.325941 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-config-data\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.327895 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-run-httpd\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.328328 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-log-httpd\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.331339 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-config-data\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.339658 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.339738 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-scripts\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.340307 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.346338 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pspgj\" (UniqueName: \"kubernetes.io/projected/e80703aa-8645-4cdb-8c1b-5511ef93bc83-kube-api-access-pspgj\") pod \"ceilometer-0\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.387968 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.497761 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.613398 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-668b7c6465-kf65r" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.624436 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.625491 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58cbb9c4c4-qq49j"] Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.704055 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z"
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.732272 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmfwh\" (UniqueName: \"kubernetes.io/projected/0621197e-b84a-4a66-bab3-ad9f4562c8f2-kube-api-access-dmfwh\") pod \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.732661 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-combined-ca-bundle\") pod \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.732721 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data\") pod \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.732889 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data-custom\") pod \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.732979 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0621197e-b84a-4a66-bab3-ad9f4562c8f2-logs\") pod \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\" (UID: \"0621197e-b84a-4a66-bab3-ad9f4562c8f2\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.746471 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0621197e-b84a-4a66-bab3-ad9f4562c8f2-logs" (OuterVolumeSpecName: "logs") pod "0621197e-b84a-4a66-bab3-ad9f4562c8f2" (UID: "0621197e-b84a-4a66-bab3-ad9f4562c8f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.749566 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0621197e-b84a-4a66-bab3-ad9f4562c8f2" (UID: "0621197e-b84a-4a66-bab3-ad9f4562c8f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.752348 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0621197e-b84a-4a66-bab3-ad9f4562c8f2-kube-api-access-dmfwh" (OuterVolumeSpecName: "kube-api-access-dmfwh") pod "0621197e-b84a-4a66-bab3-ad9f4562c8f2" (UID: "0621197e-b84a-4a66-bab3-ad9f4562c8f2"). InnerVolumeSpecName "kube-api-access-dmfwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.793324 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0621197e-b84a-4a66-bab3-ad9f4562c8f2" (UID: "0621197e-b84a-4a66-bab3-ad9f4562c8f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.809416 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data" (OuterVolumeSpecName: "config-data") pod "0621197e-b84a-4a66-bab3-ad9f4562c8f2" (UID: "0621197e-b84a-4a66-bab3-ad9f4562c8f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.838863 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data\") pod \"a8078ce3-8385-4c32-8284-2b7a416a5413\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.838926 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-combined-ca-bundle\") pod \"a8078ce3-8385-4c32-8284-2b7a416a5413\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.838973 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksdkv\" (UniqueName: \"kubernetes.io/projected/a8078ce3-8385-4c32-8284-2b7a416a5413-kube-api-access-ksdkv\") pod \"a8078ce3-8385-4c32-8284-2b7a416a5413\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839015 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data-custom\") pod \"a8078ce3-8385-4c32-8284-2b7a416a5413\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839055 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8078ce3-8385-4c32-8284-2b7a416a5413-logs\") pod \"a8078ce3-8385-4c32-8284-2b7a416a5413\" (UID: \"a8078ce3-8385-4c32-8284-2b7a416a5413\") "
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839462 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmfwh\" (UniqueName: \"kubernetes.io/projected/0621197e-b84a-4a66-bab3-ad9f4562c8f2-kube-api-access-dmfwh\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839474 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839483 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839494 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0621197e-b84a-4a66-bab3-ad9f4562c8f2-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839504 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0621197e-b84a-4a66-bab3-ad9f4562c8f2-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.839770 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8078ce3-8385-4c32-8284-2b7a416a5413-logs" (OuterVolumeSpecName: "logs") pod "a8078ce3-8385-4c32-8284-2b7a416a5413" (UID: "a8078ce3-8385-4c32-8284-2b7a416a5413"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.843507 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8078ce3-8385-4c32-8284-2b7a416a5413-kube-api-access-ksdkv" (OuterVolumeSpecName: "kube-api-access-ksdkv") pod "a8078ce3-8385-4c32-8284-2b7a416a5413" (UID: "a8078ce3-8385-4c32-8284-2b7a416a5413"). InnerVolumeSpecName "kube-api-access-ksdkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.861354 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a8078ce3-8385-4c32-8284-2b7a416a5413" (UID: "a8078ce3-8385-4c32-8284-2b7a416a5413"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.887344 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8078ce3-8385-4c32-8284-2b7a416a5413" (UID: "a8078ce3-8385-4c32-8284-2b7a416a5413"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.930647 4746 generic.go:334] "Generic (PLEG): container finished" podID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerID="5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f" exitCode=0
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.930731 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" event={"ID":"e698edd5-b718-4b17-bb7c-eccfb6d23d5e","Type":"ContainerDied","Data":"5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f"}
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.930758 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" event={"ID":"e698edd5-b718-4b17-bb7c-eccfb6d23d5e","Type":"ContainerStarted","Data":"a8aba5e57d08674853625d2072341fb2c993bbf4c02d295289b08c61ee3782a3"}
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.944895 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.944933 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksdkv\" (UniqueName: \"kubernetes.io/projected/a8078ce3-8385-4c32-8284-2b7a416a5413-kube-api-access-ksdkv\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.944945 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.944954 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8078ce3-8385-4c32-8284-2b7a416a5413-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.981450 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-668b7c6465-kf65r" event={"ID":"0621197e-b84a-4a66-bab3-ad9f4562c8f2","Type":"ContainerDied","Data":"51a2cedf44a7a3104b0b1427ecc66a3cad1ba46a79ff3eb92d45fb5b06cce4fb"}
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.981511 4746 scope.go:117] "RemoveContainer" containerID="7bd30ee3393c37edbc77334f946961a30d18374e888613a82c710292d35611c4"
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.981676 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-668b7c6465-kf65r"
Jan 29 16:56:39 crc kubenswrapper[4746]: I0129 16:56:39.998491 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbb9c4c4-qq49j" event={"ID":"7436ae82-3679-4ddd-bf25-ab3a104a8395","Type":"ContainerStarted","Data":"88299231264c8328e373528f7da766ee3e2c1dda5aff04d209be144aeaf3fe42"}
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.014346 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data" (OuterVolumeSpecName: "config-data") pod "a8078ce3-8385-4c32-8284-2b7a416a5413" (UID: "a8078ce3-8385-4c32-8284-2b7a416a5413"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.017000 4746 generic.go:334] "Generic (PLEG): container finished" podID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerID="fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff" exitCode=0
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.017055 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" event={"ID":"a8078ce3-8385-4c32-8284-2b7a416a5413","Type":"ContainerDied","Data":"fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff"}
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.017078 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z" event={"ID":"a8078ce3-8385-4c32-8284-2b7a416a5413","Type":"ContainerDied","Data":"0484252b7d0e2efe76dc0bea378ea9b097174d935917e1ee98302cec90a1a83c"}
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.017127 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84c78bb54-wv65z"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.022704 4746 scope.go:117] "RemoveContainer" containerID="c86b35296ad7e7b5a16551c96842f47ff35605cd7bb73199b1d297ed5bcf9fb8"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.049947 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8078ce3-8385-4c32-8284-2b7a416a5413-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.075482 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.148881 4746 scope.go:117] "RemoveContainer" containerID="fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.164012 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-668b7c6465-kf65r"]
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.233648 4746 scope.go:117] "RemoveContainer" containerID="3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.259824 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-668b7c6465-kf65r"]
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.262240 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84c78bb54-wv65z"]
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.270002 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-84c78bb54-wv65z"]
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.355560 4746 scope.go:117] "RemoveContainer" containerID="fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff"
Jan 29 16:56:40 crc kubenswrapper[4746]: E0129 16:56:40.356591 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff\": container with ID starting with fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff not found: ID does not exist" containerID="fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.356699 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff"} err="failed to get container status \"fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff\": rpc error: code = NotFound desc = could not find container \"fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff\": container with ID starting with fa63bb7083ae328c7f61b47b86c5fee80d82e0a9eb937457cb2a73c038e22eff not found: ID does not exist"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.356791 4746 scope.go:117] "RemoveContainer" containerID="3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e"
Jan 29 16:56:40 crc kubenswrapper[4746]: E0129 16:56:40.358411 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e\": container with ID starting with 3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e not found: ID does not exist" containerID="3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.358526 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e"} err="failed to get container status \"3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e\": rpc error: code = NotFound desc = could not find container \"3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e\": container with ID starting with 3e51033c871917941291c0f8149654ad94e624b70ec60af2e8805a57adf30b1e not found: ID does not exist"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.460122 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" path="/var/lib/kubelet/pods/0621197e-b84a-4a66-bab3-ad9f4562c8f2/volumes"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.461028 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4041b083-1378-4515-8a9d-82219087e52a" path="/var/lib/kubelet/pods/4041b083-1378-4515-8a9d-82219087e52a/volumes"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.461929 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" path="/var/lib/kubelet/pods/a8078ce3-8385-4c32-8284-2b7a416a5413/volumes"
Jan 29 16:56:40 crc kubenswrapper[4746]: I0129 16:56:40.463300 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3c163ba-2cf5-4100-a7d5-4fffc157a73a" path="/var/lib/kubelet/pods/c3c163ba-2cf5-4100-a7d5-4fffc157a73a/volumes"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.019486 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c4b578977-hfn59"]
Jan 29 16:56:41 crc kubenswrapper[4746]: E0129 16:56:41.020131 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020147 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker"
Jan 29 16:56:41 crc kubenswrapper[4746]: E0129 16:56:41.020173 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker-log"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020179 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker-log"
Jan 29 16:56:41 crc kubenswrapper[4746]: E0129 16:56:41.020212 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020218 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener"
Jan 29 16:56:41 crc kubenswrapper[4746]: E0129 16:56:41.020234 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener-log"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020239 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener-log"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020426 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener-log"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020439 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020453 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8078ce3-8385-4c32-8284-2b7a416a5413" containerName="barbican-keystone-listener"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.020469 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0621197e-b84a-4a66-bab3-ad9f4562c8f2" containerName="barbican-worker-log"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.021322 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.029353 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.029655 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.069784 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" event={"ID":"e698edd5-b718-4b17-bb7c-eccfb6d23d5e","Type":"ContainerStarted","Data":"709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7"}
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.070063 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.071244 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c4b578977-hfn59"]
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.071444 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerStarted","Data":"8c9d994150da69a0f92e9938b86ea3e7f3e361c9b85f11d54c0c84e9d7a3e56f"}
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.071473 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerStarted","Data":"dae91903ae0f2a2e16bfd0e73bf20943a279fab4d531b26fe6f37deebc8b1262"}
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.073577 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbb9c4c4-qq49j" event={"ID":"7436ae82-3679-4ddd-bf25-ab3a104a8395","Type":"ContainerStarted","Data":"7a8ce438daed6fbd897f527086cd302365accd9de795981da2cea196144576a1"}
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.073737 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbb9c4c4-qq49j" event={"ID":"7436ae82-3679-4ddd-bf25-ab3a104a8395","Type":"ContainerStarted","Data":"3c2347442a4f50c53e94d41ce0e41388371a9772c4198a7789aa0ba9fa1b690c"}
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.074315 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58cbb9c4c4-qq49j"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.106735 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" podStartSLOduration=4.106710858 podStartE2EDuration="4.106710858s" podCreationTimestamp="2026-01-29 16:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:41.096423685 +0000 UTC m=+1323.497008329" watchObservedRunningTime="2026-01-29 16:56:41.106710858 +0000 UTC m=+1323.507295502"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.127640 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58cbb9c4c4-qq49j" podStartSLOduration=3.12761718 podStartE2EDuration="3.12761718s" podCreationTimestamp="2026-01-29 16:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:41.120488994 +0000 UTC m=+1323.521073638" watchObservedRunningTime="2026-01-29 16:56:41.12761718 +0000 UTC m=+1323.528201824"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.179579 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-internal-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.179654 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-config\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.179711 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-public-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.179804 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85km\" (UniqueName: \"kubernetes.io/projected/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-kube-api-access-q85km\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.179870 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-ovndb-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.179905 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-httpd-config\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.179942 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-combined-ca-bundle\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.281846 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-internal-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.282786 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-config\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.282827 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-public-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.282872 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85km\" (UniqueName: \"kubernetes.io/projected/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-kube-api-access-q85km\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.282907 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-ovndb-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.282932 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-httpd-config\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.282956 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-combined-ca-bundle\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.288839 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-internal-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.289662 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-combined-ca-bundle\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.291952 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-public-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.293907 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-httpd-config\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.296474 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-config\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.297704 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-ovndb-tls-certs\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.304942 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85km\" (UniqueName: \"kubernetes.io/projected/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-kube-api-access-q85km\") pod \"neutron-5c4b578977-hfn59\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") " pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.353917 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.617291 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-654869dd86-s9th4"
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.717020 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79b795c958-xfcz9"]
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.717582 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79b795c958-xfcz9" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api-log" containerID="cri-o://38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa" gracePeriod=30
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.717719 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79b795c958-xfcz9" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api" containerID="cri-o://908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f" gracePeriod=30
Jan 29 16:56:41 crc kubenswrapper[4746]: I0129 16:56:41.735639 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-79b795c958-xfcz9" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF"
Jan 29 16:56:42 crc kubenswrapper[4746]: I0129 16:56:42.097805 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c4b578977-hfn59"]
Jan 29 16:56:42 crc kubenswrapper[4746]: W0129 16:56:42.101970 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda3e5e7d_45e7_4ee6_a400_bd00932ea1d6.slice/crio-8eb516ab6b5ac8c4484816892e0ffa113522b32dba4f06cfafecea6cf9b07400 WatchSource:0}: Error finding container 8eb516ab6b5ac8c4484816892e0ffa113522b32dba4f06cfafecea6cf9b07400: Status 404 returned error can't find the container with id 8eb516ab6b5ac8c4484816892e0ffa113522b32dba4f06cfafecea6cf9b07400
Jan 29 16:56:42 crc kubenswrapper[4746]: I0129 16:56:42.103311 4746 generic.go:334] "Generic (PLEG): container finished" podID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerID="38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa" exitCode=143
Jan 29 16:56:42 crc kubenswrapper[4746]: I0129 16:56:42.103480 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79b795c958-xfcz9" event={"ID":"01b2d6a7-d21f-451d-97b5-b38aef1efccf","Type":"ContainerDied","Data":"38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa"}
Jan 29 16:56:42 crc kubenswrapper[4746]: I0129 16:56:42.106887 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerStarted","Data":"5dadf115c38f48d1956360f46ff2a8c876bfc81da686065c8716e4dc2f58d14a"}
Jan 29 16:56:42 crc kubenswrapper[4746]: E0129 16:56:42.691835 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79"
Jan 29 16:56:42 crc kubenswrapper[4746]: E0129 16:56:42.692427 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pspgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e80703aa-8645-4cdb-8c1b-5511ef93bc83): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:56:42 crc kubenswrapper[4746]: E0129 16:56:42.693667 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83"
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.117078 4746 generic.go:334] "Generic (PLEG): container finished" podID="5a81565e-25dc-4269-8e78-c953acef207b" containerID="1fcd0fc16e0dc4d896486171c419af575bafdec450638ee76d77646a35a6e962" exitCode=0
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.117260 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ls92k" event={"ID":"5a81565e-25dc-4269-8e78-c953acef207b","Type":"ContainerDied","Data":"1fcd0fc16e0dc4d896486171c419af575bafdec450638ee76d77646a35a6e962"}
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.120748 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c4b578977-hfn59" event={"ID":"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6","Type":"ContainerStarted","Data":"8570c70a880e99072977cb4e1698d7dd3b7ba1f3aac7236951149c68e8cd523d"}
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.120801 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c4b578977-hfn59" event={"ID":"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6","Type":"ContainerStarted","Data":"ffe4f88f98c0c616c8a6607cb72e6acd7cdee0142ea8746e929924d4801cbfca"}
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.120812 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c4b578977-hfn59" event={"ID":"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6","Type":"ContainerStarted","Data":"8eb516ab6b5ac8c4484816892e0ffa113522b32dba4f06cfafecea6cf9b07400"}
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.120890 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.123233 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerStarted","Data":"668dd39faa1f7930d858bbf7165922ffc5f497d923c317f7332f5656ce122166"}
Jan 29 16:56:43 crc kubenswrapper[4746]: E0129 16:56:43.125319 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83"
Jan 29 16:56:43 crc kubenswrapper[4746]: I0129 16:56:43.200536 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c4b578977-hfn59" podStartSLOduration=3.200514549 podStartE2EDuration="3.200514549s" podCreationTimestamp="2026-01-29 16:56:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:43.19396074 +0000 UTC m=+1325.594545384" watchObservedRunningTime="2026-01-29 16:56:43.200514549 +0000 UTC m=+1325.601099193"
Jan 29 16:56:44 crc kubenswrapper[4746]: E0129 16:56:44.135361 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83"
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.513387 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.658339 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a81565e-25dc-4269-8e78-c953acef207b-etc-machine-id\") pod \"5a81565e-25dc-4269-8e78-c953acef207b\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") "
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.658408 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-config-data\") pod \"5a81565e-25dc-4269-8e78-c953acef207b\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") "
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.658446 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-scripts\") pod \"5a81565e-25dc-4269-8e78-c953acef207b\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") "
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.658471 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q57s2\" (UniqueName: \"kubernetes.io/projected/5a81565e-25dc-4269-8e78-c953acef207b-kube-api-access-q57s2\") pod \"5a81565e-25dc-4269-8e78-c953acef207b\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") "
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.658474 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a81565e-25dc-4269-8e78-c953acef207b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5a81565e-25dc-4269-8e78-c953acef207b" (UID: "5a81565e-25dc-4269-8e78-c953acef207b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.658494 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-combined-ca-bundle\") pod \"5a81565e-25dc-4269-8e78-c953acef207b\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") "
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.658631 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-db-sync-config-data\") pod \"5a81565e-25dc-4269-8e78-c953acef207b\" (UID: \"5a81565e-25dc-4269-8e78-c953acef207b\") "
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.659692 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a81565e-25dc-4269-8e78-c953acef207b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.664371 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a81565e-25dc-4269-8e78-c953acef207b-kube-api-access-q57s2" (OuterVolumeSpecName: "kube-api-access-q57s2") pod "5a81565e-25dc-4269-8e78-c953acef207b" (UID: "5a81565e-25dc-4269-8e78-c953acef207b"). InnerVolumeSpecName "kube-api-access-q57s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.664701 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-scripts" (OuterVolumeSpecName: "scripts") pod "5a81565e-25dc-4269-8e78-c953acef207b" (UID: "5a81565e-25dc-4269-8e78-c953acef207b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.681989 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5a81565e-25dc-4269-8e78-c953acef207b" (UID: "5a81565e-25dc-4269-8e78-c953acef207b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.684992 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a81565e-25dc-4269-8e78-c953acef207b" (UID: "5a81565e-25dc-4269-8e78-c953acef207b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.710095 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-config-data" (OuterVolumeSpecName: "config-data") pod "5a81565e-25dc-4269-8e78-c953acef207b" (UID: "5a81565e-25dc-4269-8e78-c953acef207b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.762153 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.762268 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.762290 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q57s2\" (UniqueName: \"kubernetes.io/projected/5a81565e-25dc-4269-8e78-c953acef207b-kube-api-access-q57s2\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.762308 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:44 crc kubenswrapper[4746]: I0129 16:56:44.762322 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a81565e-25dc-4269-8e78-c953acef207b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.154595 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ls92k" event={"ID":"5a81565e-25dc-4269-8e78-c953acef207b","Type":"ContainerDied","Data":"ab1458a5d2a1c7420a9492c3696225a9fddc000e30822caf4eb45f9b8c1bdf6e"}
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.154956 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1458a5d2a1c7420a9492c3696225a9fddc000e30822caf4eb45f9b8c1bdf6e"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.154718 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ls92k"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.172866 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79b795c958-xfcz9" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:35540->10.217.0.162:9311: read: connection reset by peer"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.172866 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79b795c958-xfcz9" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:35536->10.217.0.162:9311: read: connection reset by peer"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.445016 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:45 crc kubenswrapper[4746]: E0129 16:56:45.445650 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a81565e-25dc-4269-8e78-c953acef207b" containerName="cinder-db-sync"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.445668 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a81565e-25dc-4269-8e78-c953acef207b" containerName="cinder-db-sync"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.445845 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a81565e-25dc-4269-8e78-c953acef207b" containerName="cinder-db-sync"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.447792 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.452036 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.458040 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.458577 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ct2ps"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.458699 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.458834 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.559775 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-9qwdx"]
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.560031 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" podUID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerName="dnsmasq-dns" containerID="cri-o://709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7" gracePeriod=10
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.561546 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.577285 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.577402 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnf2\" (UniqueName: \"kubernetes.io/projected/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-kube-api-access-nwnf2\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.577470 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.577523 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.577568 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-scripts\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.577608 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.628836 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-gch9n"]
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.630788 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.648372 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-gch9n"]
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.678701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.678770 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.678832 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnf2\" (UniqueName: \"kubernetes.io/projected/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-kube-api-access-nwnf2\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.678859 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.678890 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.678924 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-scripts\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.680933 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.688023 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.691605 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.692285 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.708921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-scripts\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.711556 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnf2\" (UniqueName: \"kubernetes.io/projected/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-kube-api-access-nwnf2\") pod \"cinder-scheduler-0\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.751644 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.756941 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.759713 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.779994 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.781350 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.781450 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.781487 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg24m\" (UniqueName: \"kubernetes.io/projected/6c220cf0-5a8e-40d6-8034-abd6fbe38228-kube-api-access-tg24m\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.781568 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.781595 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.781653 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-config\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.782663 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.885762 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.885824 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.885858 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-config\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.885906 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca34074f-df2b-493c-9e1a-16da3d52696e-logs\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.885937 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.885984 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-scripts\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886015 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886037 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886070 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886112 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886145 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6hk\" (UniqueName: \"kubernetes.io/projected/ca34074f-df2b-493c-9e1a-16da3d52696e-kube-api-access-xt6hk\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886163 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg24m\" (UniqueName: \"kubernetes.io/projected/6c220cf0-5a8e-40d6-8034-abd6fbe38228-kube-api-access-tg24m\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886204 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca34074f-df2b-493c-9e1a-16da3d52696e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886840 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.886856 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.888007 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-config\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.888296 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.888308 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.903083 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79b795c958-xfcz9"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.903414 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg24m\" (UniqueName: \"kubernetes.io/projected/6c220cf0-5a8e-40d6-8034-abd6fbe38228-kube-api-access-tg24m\") pod \"dnsmasq-dns-75bfc9b94f-gch9n\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") " pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.950613 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989111 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data-custom\") pod \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") "
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989156 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data\") pod \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") "
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989308 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49gkb\" (UniqueName: \"kubernetes.io/projected/01b2d6a7-d21f-451d-97b5-b38aef1efccf-kube-api-access-49gkb\") pod \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") "
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989385 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b2d6a7-d21f-451d-97b5-b38aef1efccf-logs\") pod \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") "
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989477 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-combined-ca-bundle\") pod \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\" (UID: \"01b2d6a7-d21f-451d-97b5-b38aef1efccf\") "
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989731 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca34074f-df2b-493c-9e1a-16da3d52696e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0"
Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989825 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/ca34074f-df2b-493c-9e1a-16da3d52696e-logs\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989856 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989882 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-scripts\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989909 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989923 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.989961 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt6hk\" (UniqueName: \"kubernetes.io/projected/ca34074f-df2b-493c-9e1a-16da3d52696e-kube-api-access-xt6hk\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.990018 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca34074f-df2b-493c-9e1a-16da3d52696e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.990929 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca34074f-df2b-493c-9e1a-16da3d52696e-logs\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:45 crc kubenswrapper[4746]: I0129 16:56:45.991784 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b2d6a7-d21f-451d-97b5-b38aef1efccf-logs" (OuterVolumeSpecName: "logs") pod "01b2d6a7-d21f-451d-97b5-b38aef1efccf" (UID: "01b2d6a7-d21f-451d-97b5-b38aef1efccf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.004245 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-scripts\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.006698 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.007115 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b2d6a7-d21f-451d-97b5-b38aef1efccf-kube-api-access-49gkb" (OuterVolumeSpecName: "kube-api-access-49gkb") pod "01b2d6a7-d21f-451d-97b5-b38aef1efccf" (UID: "01b2d6a7-d21f-451d-97b5-b38aef1efccf"). InnerVolumeSpecName "kube-api-access-49gkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.007923 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.009275 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "01b2d6a7-d21f-451d-97b5-b38aef1efccf" (UID: "01b2d6a7-d21f-451d-97b5-b38aef1efccf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.022791 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.028446 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt6hk\" (UniqueName: \"kubernetes.io/projected/ca34074f-df2b-493c-9e1a-16da3d52696e-kube-api-access-xt6hk\") pod \"cinder-api-0\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") " pod="openstack/cinder-api-0" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.043457 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01b2d6a7-d21f-451d-97b5-b38aef1efccf" (UID: "01b2d6a7-d21f-451d-97b5-b38aef1efccf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.088713 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data" (OuterVolumeSpecName: "config-data") pod "01b2d6a7-d21f-451d-97b5-b38aef1efccf" (UID: "01b2d6a7-d21f-451d-97b5-b38aef1efccf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.091984 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49gkb\" (UniqueName: \"kubernetes.io/projected/01b2d6a7-d21f-451d-97b5-b38aef1efccf-kube-api-access-49gkb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.092016 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01b2d6a7-d21f-451d-97b5-b38aef1efccf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.092027 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.092038 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.092082 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b2d6a7-d21f-451d-97b5-b38aef1efccf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.135387 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.199755 4746 generic.go:334] "Generic (PLEG): container finished" podID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerID="908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f" exitCode=0 Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.199814 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79b795c958-xfcz9" event={"ID":"01b2d6a7-d21f-451d-97b5-b38aef1efccf","Type":"ContainerDied","Data":"908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f"} Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.199840 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79b795c958-xfcz9" event={"ID":"01b2d6a7-d21f-451d-97b5-b38aef1efccf","Type":"ContainerDied","Data":"2f4b26fd9af224fa9786f5b3496dc9b27714b0a294aa5a2b64b6250f8433259a"} Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.199855 4746 scope.go:117] "RemoveContainer" containerID="908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.199992 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79b795c958-xfcz9" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.211514 4746 generic.go:334] "Generic (PLEG): container finished" podID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerID="709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7" exitCode=0 Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.211550 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" event={"ID":"e698edd5-b718-4b17-bb7c-eccfb6d23d5e","Type":"ContainerDied","Data":"709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7"} Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.211575 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" event={"ID":"e698edd5-b718-4b17-bb7c-eccfb6d23d5e","Type":"ContainerDied","Data":"a8aba5e57d08674853625d2072341fb2c993bbf4c02d295289b08c61ee3782a3"} Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.211634 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-9qwdx" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.248943 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79b795c958-xfcz9"] Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.258021 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-79b795c958-xfcz9"] Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.260348 4746 scope.go:117] "RemoveContainer" containerID="38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.268723 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.282410 4746 scope.go:117] "RemoveContainer" containerID="908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f" Jan 29 16:56:46 crc kubenswrapper[4746]: E0129 16:56:46.282811 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f\": container with ID starting with 908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f not found: ID does not exist" containerID="908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.282853 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f"} err="failed to get container status \"908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f\": rpc error: code = NotFound desc = could not find container \"908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f\": container with ID starting with 908867c46798043ccc4b3b28baf231ee004c0f4f2fb9ebf6b68f258558314d2f not found: ID does not exist" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.282878 4746 scope.go:117] "RemoveContainer" containerID="38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa" Jan 29 16:56:46 crc kubenswrapper[4746]: E0129 16:56:46.283142 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa\": container with ID starting with 
38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa not found: ID does not exist" containerID="38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.283174 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa"} err="failed to get container status \"38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa\": rpc error: code = NotFound desc = could not find container \"38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa\": container with ID starting with 38aa3ef502850f1e507716d03ca7d3f7ade3f612390d580ca3e88cf1043093fa not found: ID does not exist" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.283207 4746 scope.go:117] "RemoveContainer" containerID="709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.299097 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx5r9\" (UniqueName: \"kubernetes.io/projected/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-kube-api-access-mx5r9\") pod \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.299213 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-nb\") pod \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.299243 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-svc\") pod \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.299298 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-swift-storage-0\") pod \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.299351 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-sb\") pod \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.299370 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-config\") pod \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\" (UID: \"e698edd5-b718-4b17-bb7c-eccfb6d23d5e\") " Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.307370 4746 scope.go:117] "RemoveContainer" containerID="5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.316115 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-kube-api-access-mx5r9" (OuterVolumeSpecName: "kube-api-access-mx5r9") pod 
"e698edd5-b718-4b17-bb7c-eccfb6d23d5e" (UID: "e698edd5-b718-4b17-bb7c-eccfb6d23d5e"). InnerVolumeSpecName "kube-api-access-mx5r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.339542 4746 scope.go:117] "RemoveContainer" containerID="709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7" Jan 29 16:56:46 crc kubenswrapper[4746]: E0129 16:56:46.339961 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7\": container with ID starting with 709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7 not found: ID does not exist" containerID="709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.340014 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7"} err="failed to get container status \"709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7\": rpc error: code = NotFound desc = could not find container \"709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7\": container with ID starting with 709f08fdc6b16119ea53ae43d719ff9cbceba454aae99746cc1d3367d81d67e7 not found: ID does not exist" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.340080 4746 scope.go:117] "RemoveContainer" containerID="5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f" Jan 29 16:56:46 crc kubenswrapper[4746]: E0129 16:56:46.340868 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f\": container with ID starting with 5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f not found: ID does not exist" containerID="5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.340913 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f"} err="failed to get container status \"5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f\": rpc error: code = NotFound desc = could not find container \"5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f\": container with ID starting with 5707491920b476c3da9b272744bf9b2abb2dc3ead6f874dd75dff4ffeb0a958f not found: ID does not exist" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.373681 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-config" (OuterVolumeSpecName: "config") pod "e698edd5-b718-4b17-bb7c-eccfb6d23d5e" (UID: "e698edd5-b718-4b17-bb7c-eccfb6d23d5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.385034 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e698edd5-b718-4b17-bb7c-eccfb6d23d5e" (UID: "e698edd5-b718-4b17-bb7c-eccfb6d23d5e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.395896 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e698edd5-b718-4b17-bb7c-eccfb6d23d5e" (UID: "e698edd5-b718-4b17-bb7c-eccfb6d23d5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.407931 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.408282 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.408363 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.408433 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx5r9\" (UniqueName: \"kubernetes.io/projected/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-kube-api-access-mx5r9\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.408669 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e698edd5-b718-4b17-bb7c-eccfb6d23d5e" (UID: "e698edd5-b718-4b17-bb7c-eccfb6d23d5e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.413704 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e698edd5-b718-4b17-bb7c-eccfb6d23d5e" (UID: "e698edd5-b718-4b17-bb7c-eccfb6d23d5e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.420889 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.464175 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" path="/var/lib/kubelet/pods/01b2d6a7-d21f-451d-97b5-b38aef1efccf/volumes" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.511306 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.511338 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e698edd5-b718-4b17-bb7c-eccfb6d23d5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.534012 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-9qwdx"] Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.546658 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-9qwdx"] Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.598766 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-gch9n"] Jan 29 16:56:46 crc kubenswrapper[4746]: W0129 16:56:46.612695 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c220cf0_5a8e_40d6_8034_abd6fbe38228.slice/crio-5b054f9cafc8a6dd0d240724fa5447c27104345e5746c5666494e2a2621ba9ca WatchSource:0}: Error finding container 5b054f9cafc8a6dd0d240724fa5447c27104345e5746c5666494e2a2621ba9ca: Status 404 returned error can't find the container with id 5b054f9cafc8a6dd0d240724fa5447c27104345e5746c5666494e2a2621ba9ca Jan 29 16:56:46 crc kubenswrapper[4746]: I0129 16:56:46.807328 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:46 crc kubenswrapper[4746]: W0129 16:56:46.831813 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca34074f_df2b_493c_9e1a_16da3d52696e.slice/crio-cdf4f0b707ae0d8f9dcfbda103c5293389ee538fa13a372ea84a459096db355d WatchSource:0}: Error finding container cdf4f0b707ae0d8f9dcfbda103c5293389ee538fa13a372ea84a459096db355d: Status 404 returned error can't find the container with id cdf4f0b707ae0d8f9dcfbda103c5293389ee538fa13a372ea84a459096db355d Jan 29 16:56:47 crc kubenswrapper[4746]: I0129 16:56:47.231813 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerID="e11c46187c0c5efa4520c8c5ce645af356b307d4ec5100ee3f05955f5731aa9e" exitCode=0 Jan 29 16:56:47 crc kubenswrapper[4746]: I0129 16:56:47.232200 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" event={"ID":"6c220cf0-5a8e-40d6-8034-abd6fbe38228","Type":"ContainerDied","Data":"e11c46187c0c5efa4520c8c5ce645af356b307d4ec5100ee3f05955f5731aa9e"} Jan 29 16:56:47 crc kubenswrapper[4746]: I0129 16:56:47.232224 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" 
event={"ID":"6c220cf0-5a8e-40d6-8034-abd6fbe38228","Type":"ContainerStarted","Data":"5b054f9cafc8a6dd0d240724fa5447c27104345e5746c5666494e2a2621ba9ca"} Jan 29 16:56:47 crc kubenswrapper[4746]: I0129 16:56:47.238542 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ca34074f-df2b-493c-9e1a-16da3d52696e","Type":"ContainerStarted","Data":"cdf4f0b707ae0d8f9dcfbda103c5293389ee538fa13a372ea84a459096db355d"} Jan 29 16:56:47 crc kubenswrapper[4746]: I0129 16:56:47.278325 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ceb61981-0fc6-401b-bf1d-16f7ba2a3753","Type":"ContainerStarted","Data":"a8f49c750b334d268206c711e9f2f2f653dc14a30bbeaee023ce852d6465ecda"} Jan 29 16:56:48 crc kubenswrapper[4746]: I0129 16:56:48.209870 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:48 crc kubenswrapper[4746]: I0129 16:56:48.292435 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ceb61981-0fc6-401b-bf1d-16f7ba2a3753","Type":"ContainerStarted","Data":"ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731"} Jan 29 16:56:48 crc kubenswrapper[4746]: I0129 16:56:48.294387 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" event={"ID":"6c220cf0-5a8e-40d6-8034-abd6fbe38228","Type":"ContainerStarted","Data":"9245156fcc29c2389db6bcbc8dc65879e7625c94c760439657b66307d38664d4"} Jan 29 16:56:48 crc kubenswrapper[4746]: I0129 16:56:48.294496 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" Jan 29 16:56:48 crc kubenswrapper[4746]: I0129 16:56:48.296358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ca34074f-df2b-493c-9e1a-16da3d52696e","Type":"ContainerStarted","Data":"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2"} Jan 29 16:56:48 crc kubenswrapper[4746]: I0129 16:56:48.318442 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" podStartSLOduration=3.318412665 podStartE2EDuration="3.318412665s" podCreationTimestamp="2026-01-29 16:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:48.311238968 +0000 UTC m=+1330.711823612" watchObservedRunningTime="2026-01-29 16:56:48.318412665 +0000 UTC m=+1330.718997309" Jan 29 16:56:48 crc kubenswrapper[4746]: I0129 16:56:48.491492 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" path="/var/lib/kubelet/pods/e698edd5-b718-4b17-bb7c-eccfb6d23d5e/volumes" Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.064828 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.065245 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.065482 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.066529 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1bf44a70454193334b73bbbaa8e59d7b095d5f8d7c6a3569af1049d7583b251"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.066601 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://c1bf44a70454193334b73bbbaa8e59d7b095d5f8d7c6a3569af1049d7583b251" gracePeriod=600 Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.306409 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="c1bf44a70454193334b73bbbaa8e59d7b095d5f8d7c6a3569af1049d7583b251" exitCode=0 Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.306489 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"c1bf44a70454193334b73bbbaa8e59d7b095d5f8d7c6a3569af1049d7583b251"} Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.306526 4746 scope.go:117] "RemoveContainer" containerID="f497afed52a8e95c6830b33adef89933088f61ef0f396f26bc62e5bc61330609" Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.310992 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ceb61981-0fc6-401b-bf1d-16f7ba2a3753","Type":"ContainerStarted","Data":"5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62"} Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.314146 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ca34074f-df2b-493c-9e1a-16da3d52696e","Type":"ContainerStarted","Data":"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546"} Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.314167 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api-log" containerID="cri-o://e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2" gracePeriod=30 Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.314322 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api" containerID="cri-o://808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546" gracePeriod=30 Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.314493 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.336355 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.634919236 podStartE2EDuration="4.336331786s" podCreationTimestamp="2026-01-29 
16:56:45 +0000 UTC" firstStartedPulling="2026-01-29 16:56:46.423623405 +0000 UTC m=+1328.824208049" lastFinishedPulling="2026-01-29 16:56:47.125035955 +0000 UTC m=+1329.525620599" observedRunningTime="2026-01-29 16:56:49.329610293 +0000 UTC m=+1331.730194937" watchObservedRunningTime="2026-01-29 16:56:49.336331786 +0000 UTC m=+1331.736916430"
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.352958 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.352933561 podStartE2EDuration="4.352933561s" podCreationTimestamp="2026-01-29 16:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:49.346719341 +0000 UTC m=+1331.747304005" watchObservedRunningTime="2026-01-29 16:56:49.352933561 +0000 UTC m=+1331.753518215"
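
The arithmetic behind the two "Observed pod startup duration" entries: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling), which is how the Kubernetes pod-startup SLO is defined. For cinder-scheduler-0 that is 4.336331786s minus 0.701412550s = 3.634919236s; cinder-api-0 pulled no images (zero-valued pull timestamps), so both numbers are 4.352933561s. A short, self-contained check of those numbers, with the timestamps copied from the entries above:

    package main

    import (
        "fmt"
        "time"
    )

    // The layout's ".999999999" makes fractional seconds optional, so the
    // creation timestamp (which has none) parses with the same layout.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-29 16:56:45 +0000 UTC")
        firstPull := mustParse("2026-01-29 16:56:46.423623405 +0000 UTC")
        lastPull := mustParse("2026-01-29 16:56:47.125035955 +0000 UTC")
        running := mustParse("2026-01-29 16:56:49.336331786 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus pull time
        fmt.Println(e2e, slo)                // prints: 4.336331786s 3.634919236s
    }
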
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.864026 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.976747 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-scripts\") pod \"ca34074f-df2b-493c-9e1a-16da3d52696e\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") "
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.976847 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt6hk\" (UniqueName: \"kubernetes.io/projected/ca34074f-df2b-493c-9e1a-16da3d52696e-kube-api-access-xt6hk\") pod \"ca34074f-df2b-493c-9e1a-16da3d52696e\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") "
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.976889 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data\") pod \"ca34074f-df2b-493c-9e1a-16da3d52696e\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") "
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.976930 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca34074f-df2b-493c-9e1a-16da3d52696e-logs\") pod \"ca34074f-df2b-493c-9e1a-16da3d52696e\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") "
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.976954 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data-custom\") pod \"ca34074f-df2b-493c-9e1a-16da3d52696e\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") "
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.976976 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca34074f-df2b-493c-9e1a-16da3d52696e-etc-machine-id\") pod \"ca34074f-df2b-493c-9e1a-16da3d52696e\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") "
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.977032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-combined-ca-bundle\") pod \"ca34074f-df2b-493c-9e1a-16da3d52696e\" (UID: \"ca34074f-df2b-493c-9e1a-16da3d52696e\") "
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.977104 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca34074f-df2b-493c-9e1a-16da3d52696e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ca34074f-df2b-493c-9e1a-16da3d52696e" (UID: "ca34074f-df2b-493c-9e1a-16da3d52696e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.977363 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca34074f-df2b-493c-9e1a-16da3d52696e-logs" (OuterVolumeSpecName: "logs") pod "ca34074f-df2b-493c-9e1a-16da3d52696e" (UID: "ca34074f-df2b-493c-9e1a-16da3d52696e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:56:49 crc kubenswrapper[4746]: I0129 16:56:49.977385 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ca34074f-df2b-493c-9e1a-16da3d52696e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.188617 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca34074f-df2b-493c-9e1a-16da3d52696e" (UID: "ca34074f-df2b-493c-9e1a-16da3d52696e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.188741 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca34074f-df2b-493c-9e1a-16da3d52696e-kube-api-access-xt6hk" (OuterVolumeSpecName: "kube-api-access-xt6hk") pod "ca34074f-df2b-493c-9e1a-16da3d52696e" (UID: "ca34074f-df2b-493c-9e1a-16da3d52696e"). InnerVolumeSpecName "kube-api-access-xt6hk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.190540 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt6hk\" (UniqueName: \"kubernetes.io/projected/ca34074f-df2b-493c-9e1a-16da3d52696e-kube-api-access-xt6hk\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.190567 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca34074f-df2b-493c-9e1a-16da3d52696e-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.190580 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data-custom\") on node \"crc\" DevicePath \"\""
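
Teardown for the old cinder-api-0 pod mirrors the mount path: "UnmountVolume started" from the reconciler, "UnmountVolume.TearDown succeeded" from the operation generator (OuterVolumeSpecName is the name used in the pod spec, InnerVolumeSpecName the one the plugin resolved), and "Volume detached" only after the volume has been dropped from the actual state of world. A sketch of why that ordering matters, again with hypothetical types rather than the kubelet's real API:

    package main

    import "fmt"

    // actualState is a hypothetical stand-in for the kubelet's actual state
    // of world. The invariant shown: a volume is removed from it (and hence
    // reported "Volume detached") only after TearDown succeeds, so a crash
    // between the two replays an idempotent TearDown instead of leaking a mount.
    type actualState struct {
        mounted map[string]bool
    }

    func (a *actualState) unmountAll(tearDown func(name string) error) {
        for name := range a.mounted {
            fmt.Printf("UnmountVolume started for volume %q\n", name)
            if err := tearDown(name); err != nil { // e.g. umount(2) under the hood
                fmt.Printf("UnmountVolume.TearDown failed for %q: %v\n", name, err)
                continue // stays in actual state; retried next reconcile
            }
            fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", name)
            delete(a.mounted, name) // now, and only now, is it detached
            fmt.Printf("Volume detached for volume %q DevicePath %q\n", name, "")
        }
    }

    func main() {
        a := &actualState{mounted: map[string]bool{"logs": true, "scripts": true}}
        a.unmountAll(func(string) error { return nil })
    }
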
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.190892 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-scripts" (OuterVolumeSpecName: "scripts") pod "ca34074f-df2b-493c-9e1a-16da3d52696e" (UID: "ca34074f-df2b-493c-9e1a-16da3d52696e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.193541 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca34074f-df2b-493c-9e1a-16da3d52696e" (UID: "ca34074f-df2b-493c-9e1a-16da3d52696e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.219400 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data" (OuterVolumeSpecName: "config-data") pod "ca34074f-df2b-493c-9e1a-16da3d52696e" (UID: "ca34074f-df2b-493c-9e1a-16da3d52696e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.292456 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.292486 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.292496 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca34074f-df2b-493c-9e1a-16da3d52696e-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.325627 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156"}
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.327944 4746 generic.go:334] "Generic (PLEG): container finished" podID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerID="808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546" exitCode=0
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.327994 4746 generic.go:334] "Generic (PLEG): container finished" podID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerID="e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2" exitCode=143
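
The "Generic (PLEG): container finished" entries come from the Pod Lifecycle Event Generator, which periodically relists containers from the CRI runtime, diffs the result against its cached snapshot, and turns each difference into a lifecycle event; the "SyncLoop (PLEG): event for pod" entries just below are those events being consumed. Exit code 143 is SIGTERM (128 + 15), consistent with the graceful kill requested at 16:56:49. A simplified relist-and-diff sketch (hypothetical types, not the kubelet's):

    package main

    import "fmt"

    // state is a hypothetical, simplified stand-in for the runtime's view
    // of a container; the real PLEG diffs full pod records from CRI.
    type state int

    const (
        unknown state = iota
        running
        exited
    )

    type event struct {
        ContainerID string
        Type        string // "ContainerStarted" or "ContainerDied"
    }

    // relist compares the previous snapshot with the current one and emits
    // the lifecycle events the kubelet logs as "SyncLoop (PLEG): event for pod".
    func relist(old, cur map[string]state, events chan<- event) {
        for id, s := range cur {
            switch {
            case s == running && old[id] != running:
                events <- event{id, "ContainerStarted"}
            case s == exited && old[id] == running:
                events <- event{id, "ContainerDied"}
            }
        }
    }

    func main() {
        events := make(chan event, 4)
        old := map[string]state{"808e970a": running, "e836e970": running}
        cur := map[string]state{"808e970a": exited, "e836e970": exited}
        relist(old, cur, events) // in the kubelet this runs on a ~1s ticker
        close(events)
        for e := range events {
            fmt.Printf("SyncLoop (PLEG): %s %s\n", e.Type, e.ContainerID)
        }
    }
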
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.328014 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.328023 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ca34074f-df2b-493c-9e1a-16da3d52696e","Type":"ContainerDied","Data":"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546"}
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.328064 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ca34074f-df2b-493c-9e1a-16da3d52696e","Type":"ContainerDied","Data":"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2"}
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.328079 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ca34074f-df2b-493c-9e1a-16da3d52696e","Type":"ContainerDied","Data":"cdf4f0b707ae0d8f9dcfbda103c5293389ee538fa13a372ea84a459096db355d"}
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.328101 4746 scope.go:117] "RemoveContainer" containerID="808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.374392 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.378734 4746 scope.go:117] "RemoveContainer" containerID="e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.396119 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414008 4746 scope.go:117] "RemoveContainer" containerID="808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414107 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
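
Deleting and recreating a pod with the same name arrives as the SyncLoop DELETE / REMOVE / ADD triple above: the old API object (UID ca34074f-...) is graceful-deleted, confirmed removed, and a new object with a fresh UID is added. The repeated "RemoveContainer" entries are two cleanup paths racing over the same dead containers; the loser's runtime call gets gRPC NotFound, which shows up just below as "ContainerStatus from runtime service failed" and "DeleteContainer returned error" and is then treated as already-done. A sketch of that idempotent-delete pattern (hypothetical wrapper, not kubelet code; the callback stands in for a CRI client):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer logs a NotFound from the runtime but does not fail on
    // it: if the container is already gone, the deletion goal is achieved.
    func removeContainer(id string, remove func(id string) error) {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("DeleteContainer returned error (benign): %v\n", err)
                return // the other cleanup path won the race; nothing left to do
            }
            fmt.Printf("DeleteContainer failed: %v\n", err) // a real error
        }
    }

    func main() {
        // Simulate the second deletion attempt seen in the log below.
        gone := status.Errorf(codes.NotFound,
            "could not find container %q: ID does not exist", "808e970afd0f")
        removeContainer("808e970afd0f", func(string) error { return gone })
    }
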
Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.414521 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api-log"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414537 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api-log"
Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.414556 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerName="init"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414562 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerName="init"
Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.414575 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414581 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api"
Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.414601 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414607 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api"
Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.414616 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerName="dnsmasq-dns"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414622 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerName="dnsmasq-dns"
Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.414639 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api-log"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414644 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api-log"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414798 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e698edd5-b718-4b17-bb7c-eccfb6d23d5e" containerName="dnsmasq-dns"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414812 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api-log"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414821 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b2d6a7-d21f-451d-97b5-b38aef1efccf" containerName="barbican-api"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414833 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api-log"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.414848 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" containerName="cinder-api"
Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.414931 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546\": container with ID starting with 808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546 not found: ID does not exist" containerID="808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.415004 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546"} err="failed to get container status \"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546\": rpc error: code = NotFound desc = could not find container \"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546\": container with ID starting with 808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546 not found: ID does not exist"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.415029 4746 scope.go:117] "RemoveContainer" containerID="e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2"
Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.415974 4746 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: E0129 16:56:50.417931 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2\": container with ID starting with e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2 not found: ID does not exist" containerID="e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.417976 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2"} err="failed to get container status \"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2\": rpc error: code = NotFound desc = could not find container \"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2\": container with ID starting with e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2 not found: ID does not exist" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.418015 4746 scope.go:117] "RemoveContainer" containerID="808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.418442 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546"} err="failed to get container status \"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546\": rpc error: code = NotFound desc = could not find container \"808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546\": container with ID starting with 808e970afd0f947c0885e0e07cfda71bb7117b7871c522953f9bd885aea80546 not found: ID does not exist" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.418478 4746 scope.go:117] "RemoveContainer" containerID="e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.418663 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.418699 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.418674 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.421760 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2"} err="failed to get container status \"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2\": rpc error: code = NotFound desc = could not find container \"e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2\": container with ID starting with e836e9708fd8b8ef2c81d35b9c7484e885cbfbf6f987a21673bb36e2aa62e8c2 not found: ID does not exist" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.423890 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.465558 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca34074f-df2b-493c-9e1a-16da3d52696e" 
path="/var/lib/kubelet/pods/ca34074f-df2b-493c-9e1a-16da3d52696e/volumes" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.498495 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-logs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.498617 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-scripts\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.498716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv94l\" (UniqueName: \"kubernetes.io/projected/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-kube-api-access-tv94l\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.498800 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.498921 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data-custom\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.498949 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.498981 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.499026 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.499261 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.601719 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.601865 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data-custom\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.601899 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.602062 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.602222 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.602283 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.602354 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-logs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.602403 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-scripts\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.602685 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv94l\" (UniqueName: \"kubernetes.io/projected/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-kube-api-access-tv94l\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.606367 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.608110 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-logs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.611863 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data-custom\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.613065 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.618889 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.623352 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.626674 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.626887 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-scripts\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.633750 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv94l\" (UniqueName: \"kubernetes.io/projected/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-kube-api-access-tv94l\") pod \"cinder-api-0\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.732053 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:50 crc kubenswrapper[4746]: I0129 16:56:50.780374 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 16:56:51 crc kubenswrapper[4746]: I0129 16:56:51.222940 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:51 crc kubenswrapper[4746]: W0129 16:56:51.223577 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf727c52_99b6_4ab8_9815_4ab8c2dd5050.slice/crio-802dddbc19775990d7e7c69801ae1105528a44dde62bf6d7beb4076db6d83716 WatchSource:0}: Error finding container 802dddbc19775990d7e7c69801ae1105528a44dde62bf6d7beb4076db6d83716: Status 404 returned error can't find the container with id 802dddbc19775990d7e7c69801ae1105528a44dde62bf6d7beb4076db6d83716 Jan 29 16:56:51 crc kubenswrapper[4746]: I0129 16:56:51.358141 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf727c52-99b6-4ab8-9815-4ab8c2dd5050","Type":"ContainerStarted","Data":"802dddbc19775990d7e7c69801ae1105528a44dde62bf6d7beb4076db6d83716"} Jan 29 16:56:52 crc kubenswrapper[4746]: I0129 16:56:52.370070 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf727c52-99b6-4ab8-9815-4ab8c2dd5050","Type":"ContainerStarted","Data":"14a457ada9ded8a131b71b82b1d68aaab179b4e4656c9f30a8ff693e5705512c"} Jan 29 16:56:53 crc kubenswrapper[4746]: I0129 16:56:53.380867 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf727c52-99b6-4ab8-9815-4ab8c2dd5050","Type":"ContainerStarted","Data":"1a56f205c5cc1a3a3d1140e62eef702ab9a791c26bc2dc47a9b8cf3218933c17"} Jan 29 16:56:53 crc kubenswrapper[4746]: I0129 16:56:53.381320 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 16:56:53 crc kubenswrapper[4746]: I0129 16:56:53.403014 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.402989207 podStartE2EDuration="3.402989207s" podCreationTimestamp="2026-01-29 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:53.397876227 +0000 UTC m=+1335.798460871" watchObservedRunningTime="2026-01-29 16:56:53.402989207 +0000 UTC m=+1335.803573851" Jan 29 16:56:55 crc kubenswrapper[4746]: I0129 16:56:55.953387 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" Jan 29 16:56:55 crc kubenswrapper[4746]: I0129 16:56:55.987971 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.026874 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-5r4xd"] Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.027420 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" podUID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerName="dnsmasq-dns" containerID="cri-o://336e557a58918bd33ef8f9cef0d616bcefaf87b5e58a4de4db8fae23e13dbe62" gracePeriod=10 Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.052855 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.421110 4746 generic.go:334] "Generic (PLEG): container finished" podID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerID="336e557a58918bd33ef8f9cef0d616bcefaf87b5e58a4de4db8fae23e13dbe62" exitCode=0 Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.421235 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" event={"ID":"cd947b0c-106a-45a0-95ae-5a7971e14e64","Type":"ContainerDied","Data":"336e557a58918bd33ef8f9cef0d616bcefaf87b5e58a4de4db8fae23e13dbe62"} Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.421546 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="cinder-scheduler" containerID="cri-o://ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731" gracePeriod=30 Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.421633 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="probe" containerID="cri-o://5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62" gracePeriod=30 Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.709797 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.821255 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-config\") pod \"cd947b0c-106a-45a0-95ae-5a7971e14e64\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.821350 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-svc\") pod \"cd947b0c-106a-45a0-95ae-5a7971e14e64\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.821509 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-sb\") pod \"cd947b0c-106a-45a0-95ae-5a7971e14e64\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.821535 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-swift-storage-0\") pod \"cd947b0c-106a-45a0-95ae-5a7971e14e64\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.821634 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpxh6\" (UniqueName: \"kubernetes.io/projected/cd947b0c-106a-45a0-95ae-5a7971e14e64-kube-api-access-tpxh6\") pod \"cd947b0c-106a-45a0-95ae-5a7971e14e64\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.821676 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-nb\") pod 
\"cd947b0c-106a-45a0-95ae-5a7971e14e64\" (UID: \"cd947b0c-106a-45a0-95ae-5a7971e14e64\") " Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.833010 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd947b0c-106a-45a0-95ae-5a7971e14e64-kube-api-access-tpxh6" (OuterVolumeSpecName: "kube-api-access-tpxh6") pod "cd947b0c-106a-45a0-95ae-5a7971e14e64" (UID: "cd947b0c-106a-45a0-95ae-5a7971e14e64"). InnerVolumeSpecName "kube-api-access-tpxh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.869853 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd947b0c-106a-45a0-95ae-5a7971e14e64" (UID: "cd947b0c-106a-45a0-95ae-5a7971e14e64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.872544 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd947b0c-106a-45a0-95ae-5a7971e14e64" (UID: "cd947b0c-106a-45a0-95ae-5a7971e14e64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.873856 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-config" (OuterVolumeSpecName: "config") pod "cd947b0c-106a-45a0-95ae-5a7971e14e64" (UID: "cd947b0c-106a-45a0-95ae-5a7971e14e64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.886057 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd947b0c-106a-45a0-95ae-5a7971e14e64" (UID: "cd947b0c-106a-45a0-95ae-5a7971e14e64"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.890939 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd947b0c-106a-45a0-95ae-5a7971e14e64" (UID: "cd947b0c-106a-45a0-95ae-5a7971e14e64"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.923781 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.924199 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.924313 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.924331 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.924344 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpxh6\" (UniqueName: \"kubernetes.io/projected/cd947b0c-106a-45a0-95ae-5a7971e14e64-kube-api-access-tpxh6\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:56 crc kubenswrapper[4746]: I0129 16:56:56.924357 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd947b0c-106a-45a0-95ae-5a7971e14e64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.433094 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.433092 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6554f656b5-5r4xd" event={"ID":"cd947b0c-106a-45a0-95ae-5a7971e14e64","Type":"ContainerDied","Data":"b9f4b28ffb679b20c4c1d2ac62b494bb0d714fb8d2fbf600bc3a62ac65046a1f"} Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.433552 4746 scope.go:117] "RemoveContainer" containerID="336e557a58918bd33ef8f9cef0d616bcefaf87b5e58a4de4db8fae23e13dbe62" Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.437935 4746 generic.go:334] "Generic (PLEG): container finished" podID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerID="5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62" exitCode=0 Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.438007 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ceb61981-0fc6-401b-bf1d-16f7ba2a3753","Type":"ContainerDied","Data":"5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62"} Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.467742 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-5r4xd"] Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.472823 4746 scope.go:117] "RemoveContainer" containerID="32689455729e0bb70dca5bed4975f76a72e5bf807ae82ea09c50c2a1d15bcc0b" Jan 29 16:56:57 crc kubenswrapper[4746]: I0129 16:56:57.475634 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6554f656b5-5r4xd"] Jan 29 16:56:58 crc kubenswrapper[4746]: I0129 16:56:58.462609 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd947b0c-106a-45a0-95ae-5a7971e14e64" path="/var/lib/kubelet/pods/cd947b0c-106a-45a0-95ae-5a7971e14e64/volumes" Jan 29 16:56:58 crc kubenswrapper[4746]: I0129 16:56:58.463440 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 16:56:58 crc kubenswrapper[4746]: E0129 16:56:58.602257 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:56:58 crc kubenswrapper[4746]: E0129 16:56:58.602705 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pspgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ceilometer-0_openstack(e80703aa-8645-4cdb-8c1b-5511ef93bc83): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:58 crc kubenswrapper[4746]: E0129 16:56:58.603941 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.454976 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.469615 4746 generic.go:334] "Generic (PLEG): container finished" podID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerID="ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731" exitCode=0 Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.469657 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ceb61981-0fc6-401b-bf1d-16f7ba2a3753","Type":"ContainerDied","Data":"ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731"} Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.469690 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ceb61981-0fc6-401b-bf1d-16f7ba2a3753","Type":"ContainerDied","Data":"a8f49c750b334d268206c711e9f2f2f653dc14a30bbeaee023ce852d6465ecda"} Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.469712 4746 scope.go:117] "RemoveContainer" containerID="5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62" Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.470720 4746 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.509444 4746 scope.go:117] "RemoveContainer" containerID="ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.538803 4746 scope.go:117] "RemoveContainer" containerID="5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62"
Jan 29 16:56:59 crc kubenswrapper[4746]: E0129 16:56:59.539231 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62\": container with ID starting with 5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62 not found: ID does not exist" containerID="5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.539259 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62"} err="failed to get container status \"5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62\": rpc error: code = NotFound desc = could not find container \"5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62\": container with ID starting with 5503ff3d622f1ab3d9e88907bc7ec28575bfc7e819bc89e5fe7c7fc1b03d2d62 not found: ID does not exist"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.539280 4746 scope.go:117] "RemoveContainer" containerID="ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731"
Jan 29 16:56:59 crc kubenswrapper[4746]: E0129 16:56:59.539571 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731\": container with ID starting with ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731 not found: ID does not exist" containerID="ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.539590 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731"} err="failed to get container status \"ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731\": rpc error: code = NotFound desc = could not find container \"ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731\": container with ID starting with ab070c6786b8f7eb9a5d1ef6d99142821002ad09ab011cfa73b11e4bf1c41731 not found: ID does not exist"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.573910 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnf2\" (UniqueName: \"kubernetes.io/projected/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-kube-api-access-nwnf2\") pod \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") "
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.573963 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-etc-machine-id\") pod \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") "
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.574060 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-scripts\") pod \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") "
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.574152 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ceb61981-0fc6-401b-bf1d-16f7ba2a3753" (UID: "ceb61981-0fc6-401b-bf1d-16f7ba2a3753"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.574159 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-combined-ca-bundle\") pod \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") "
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.574405 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data\") pod \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") "
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.574438 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data-custom\") pod \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\" (UID: \"ceb61981-0fc6-401b-bf1d-16f7ba2a3753\") "
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.575421 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.579634 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ceb61981-0fc6-401b-bf1d-16f7ba2a3753" (UID: "ceb61981-0fc6-401b-bf1d-16f7ba2a3753"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.579767 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-kube-api-access-nwnf2" (OuterVolumeSpecName: "kube-api-access-nwnf2") pod "ceb61981-0fc6-401b-bf1d-16f7ba2a3753" (UID: "ceb61981-0fc6-401b-bf1d-16f7ba2a3753"). InnerVolumeSpecName "kube-api-access-nwnf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.580237 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-scripts" (OuterVolumeSpecName: "scripts") pod "ceb61981-0fc6-401b-bf1d-16f7ba2a3753" (UID: "ceb61981-0fc6-401b-bf1d-16f7ba2a3753"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.631853 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceb61981-0fc6-401b-bf1d-16f7ba2a3753" (UID: "ceb61981-0fc6-401b-bf1d-16f7ba2a3753"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.679458 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.679492 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.679508 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.679522 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnf2\" (UniqueName: \"kubernetes.io/projected/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-kube-api-access-nwnf2\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.687058 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data" (OuterVolumeSpecName: "config-data") pod "ceb61981-0fc6-401b-bf1d-16f7ba2a3753" (UID: "ceb61981-0fc6-401b-bf1d-16f7ba2a3753"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.781489 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb61981-0fc6-401b-bf1d-16f7ba2a3753-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.811562 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.827291 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.838901 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:59 crc kubenswrapper[4746]: E0129 16:56:59.839388 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerName="dnsmasq-dns"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.839414 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerName="dnsmasq-dns"
Jan 29 16:56:59 crc kubenswrapper[4746]: E0129 16:56:59.839433 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerName="init"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.839441 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerName="init"
Jan 29 16:56:59 crc kubenswrapper[4746]: E0129 16:56:59.839469 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="probe"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.839476 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="probe"
Jan 29 16:56:59 crc kubenswrapper[4746]: E0129 16:56:59.839488 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="cinder-scheduler"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.839494 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="cinder-scheduler"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.839670 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="cinder-scheduler"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.839683 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd947b0c-106a-45a0-95ae-5a7971e14e64" containerName="dnsmasq-dns"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.839691 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" containerName="probe"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.840837 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.843400 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.854036 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.985466 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.985524 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.985693 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq8ck\" (UniqueName: \"kubernetes.io/projected/76935545-e8e3-4523-97b0-edce25c6756d-kube-api-access-fq8ck\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.985902 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76935545-e8e3-4523-97b0-edce25c6756d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.985965 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:59 crc kubenswrapper[4746]: I0129 16:56:59.986153 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-scripts\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.087700 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.088061 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.088096 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq8ck\" (UniqueName: \"kubernetes.io/projected/76935545-e8e3-4523-97b0-edce25c6756d-kube-api-access-fq8ck\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.088135 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76935545-e8e3-4523-97b0-edce25c6756d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.088154 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.088210 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-scripts\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.088439 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76935545-e8e3-4523-97b0-edce25c6756d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.092567 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-scripts\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.092990 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.093126 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.100796 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.106314 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq8ck\" (UniqueName: \"kubernetes.io/projected/76935545-e8e3-4523-97b0-edce25c6756d-kube-api-access-fq8ck\") pod \"cinder-scheduler-0\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.174779 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.459297 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb61981-0fc6-401b-bf1d-16f7ba2a3753" path="/var/lib/kubelet/pods/ceb61981-0fc6-401b-bf1d-16f7ba2a3753/volumes"
Jan 29 16:57:00 crc kubenswrapper[4746]: I0129 16:57:00.660902 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:57:01 crc kubenswrapper[4746]: I0129 16:57:01.297337 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b7cbf56d-9h4gg"
Jan 29 16:57:01 crc kubenswrapper[4746]: I0129 16:57:01.350920 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b7cbf56d-9h4gg"
Jan 29 16:57:01 crc kubenswrapper[4746]: I0129 16:57:01.423778 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fb46889d-2pzb6"]
Jan 29 16:57:01 crc kubenswrapper[4746]: I0129 16:57:01.424007 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7fb46889d-2pzb6" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-log" containerID="cri-o://af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8" gracePeriod=30
Jan 29 16:57:01 crc kubenswrapper[4746]: I0129 16:57:01.424402 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7fb46889d-2pzb6" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-api" containerID="cri-o://cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c" gracePeriod=30
Jan 29 16:57:01 crc kubenswrapper[4746]: I0129 16:57:01.540725 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76935545-e8e3-4523-97b0-edce25c6756d","Type":"ContainerStarted","Data":"59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e"}
Jan 29 16:57:01 crc kubenswrapper[4746]: I0129 16:57:01.541309 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76935545-e8e3-4523-97b0-edce25c6756d","Type":"ContainerStarted","Data":"024b77fbb808564631fc6d6ae314b6138dbf3d73713f5f228cc4ac270729222f"}
Jan 29 16:57:02 crc kubenswrapper[4746]: I0129 16:57:02.551569 4746 generic.go:334] "Generic (PLEG): container finished" podID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerID="af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8" exitCode=143
Jan 29 16:57:02 crc kubenswrapper[4746]: I0129 16:57:02.551635 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb46889d-2pzb6" event={"ID":"adb54f40-e963-4c6c-9a9c-03195655c57b","Type":"ContainerDied","Data":"af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8"}
Jan 29 16:57:02 crc kubenswrapper[4746]: I0129 16:57:02.554119 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76935545-e8e3-4523-97b0-edce25c6756d","Type":"ContainerStarted","Data":"0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f"}
Jan 29 16:57:02 crc kubenswrapper[4746]: I0129 16:57:02.581764 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.581744803 podStartE2EDuration="3.581744803s" podCreationTimestamp="2026-01-29 16:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:02.576126169 +0000 UTC m=+1344.976710823" watchObservedRunningTime="2026-01-29 16:57:02.581744803 +0000 UTC m=+1344.982329447"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.080325 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.227085 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7cd65c77b7-kbbjb"]
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.229564 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.235731 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.235944 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.236005 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.247623 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7cd65c77b7-kbbjb"]
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354397 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-etc-swift\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354447 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-internal-tls-certs\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354473 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-combined-ca-bundle\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354513 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgk5t\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-kube-api-access-vgk5t\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354588 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-log-httpd\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354607 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-public-tls-certs\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354654 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-config-data\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.354672 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-run-httpd\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.406414 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.407860 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.409919 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fcnxr"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.409963 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.410681 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.416356 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457435 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgk5t\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-kube-api-access-vgk5t\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457522 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-log-httpd\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-public-tls-certs\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457588 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-config-data\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457609 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-run-httpd\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457675 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-etc-swift\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457691 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-internal-tls-certs\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.457710 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-combined-ca-bundle\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.458958 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-log-httpd\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.459038 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-run-httpd\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.465424 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-etc-swift\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.467070 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-combined-ca-bundle\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.467665 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-internal-tls-certs\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.467928 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-public-tls-certs\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.469544 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-config-data\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.480887 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgk5t\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-kube-api-access-vgk5t\") pod \"swift-proxy-7cd65c77b7-kbbjb\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.552581 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.559434 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.559580 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.559637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg9f4\" (UniqueName: \"kubernetes.io/projected/0185f119-b92f-4f05-9d0d-f0b2e081c331-kube-api-access-rg9f4\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.561139 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config-secret\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.662686 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg9f4\" (UniqueName: \"kubernetes.io/projected/0185f119-b92f-4f05-9d0d-f0b2e081c331-kube-api-access-rg9f4\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient"
Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.662876 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName:
\"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config-secret\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient" Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.662934 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient" Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.663245 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient" Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.664275 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient" Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.668506 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient" Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.675450 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config-secret\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient" Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.687089 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg9f4\" (UniqueName: \"kubernetes.io/projected/0185f119-b92f-4f05-9d0d-f0b2e081c331-kube-api-access-rg9f4\") pod \"openstackclient\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " pod="openstack/openstackclient" Jan 29 16:57:03 crc kubenswrapper[4746]: I0129 16:57:03.724471 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.137224 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7cd65c77b7-kbbjb"] Jan 29 16:57:04 crc kubenswrapper[4746]: W0129 16:57:04.145364 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c0d701f_0ba6_4836_b3f9_1425b411d80d.slice/crio-8e1c8926e9dcf96c4c9de7e66da55db6276e763c8c2690d21c37f97e0a95ca24 WatchSource:0}: Error finding container 8e1c8926e9dcf96c4c9de7e66da55db6276e763c8c2690d21c37f97e0a95ca24: Status 404 returned error can't find the container with id 8e1c8926e9dcf96c4c9de7e66da55db6276e763c8c2690d21c37f97e0a95ca24 Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.210644 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 16:57:04 crc kubenswrapper[4746]: W0129 16:57:04.220168 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0185f119_b92f_4f05_9d0d_f0b2e081c331.slice/crio-65354fb8e3f3f7bf2b411897aba4929c8340af8f3c54fd0b8264e0564c0e3479 WatchSource:0}: Error finding container 65354fb8e3f3f7bf2b411897aba4929c8340af8f3c54fd0b8264e0564c0e3479: Status 404 returned error can't find the container with id 65354fb8e3f3f7bf2b411897aba4929c8340af8f3c54fd0b8264e0564c0e3479 Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.571264 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0185f119-b92f-4f05-9d0d-f0b2e081c331","Type":"ContainerStarted","Data":"65354fb8e3f3f7bf2b411897aba4929c8340af8f3c54fd0b8264e0564c0e3479"} Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.573773 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" event={"ID":"5c0d701f-0ba6-4836-b3f9-1425b411d80d","Type":"ContainerStarted","Data":"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"} Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.573847 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" event={"ID":"5c0d701f-0ba6-4836-b3f9-1425b411d80d","Type":"ContainerStarted","Data":"8e1c8926e9dcf96c4c9de7e66da55db6276e763c8c2690d21c37f97e0a95ca24"} Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.658389 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.658902 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-central-agent" containerID="cri-o://8c9d994150da69a0f92e9938b86ea3e7f3e361c9b85f11d54c0c84e9d7a3e56f" gracePeriod=30 Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.660371 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="sg-core" containerID="cri-o://668dd39faa1f7930d858bbf7165922ffc5f497d923c317f7332f5656ce122166" gracePeriod=30 Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.660565 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-notification-agent" 
containerID="cri-o://5dadf115c38f48d1956360f46ff2a8c876bfc81da686065c8716e4dc2f58d14a" gracePeriod=30 Jan 29 16:57:04 crc kubenswrapper[4746]: I0129 16:57:04.982178 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.090832 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-internal-tls-certs\") pod \"adb54f40-e963-4c6c-9a9c-03195655c57b\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.090935 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb54f40-e963-4c6c-9a9c-03195655c57b-logs\") pod \"adb54f40-e963-4c6c-9a9c-03195655c57b\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.091642 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb54f40-e963-4c6c-9a9c-03195655c57b-logs" (OuterVolumeSpecName: "logs") pod "adb54f40-e963-4c6c-9a9c-03195655c57b" (UID: "adb54f40-e963-4c6c-9a9c-03195655c57b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.091733 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-scripts\") pod \"adb54f40-e963-4c6c-9a9c-03195655c57b\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.092285 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dnpn\" (UniqueName: \"kubernetes.io/projected/adb54f40-e963-4c6c-9a9c-03195655c57b-kube-api-access-7dnpn\") pod \"adb54f40-e963-4c6c-9a9c-03195655c57b\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.092355 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-config-data\") pod \"adb54f40-e963-4c6c-9a9c-03195655c57b\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.092394 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-combined-ca-bundle\") pod \"adb54f40-e963-4c6c-9a9c-03195655c57b\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.092473 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-public-tls-certs\") pod \"adb54f40-e963-4c6c-9a9c-03195655c57b\" (UID: \"adb54f40-e963-4c6c-9a9c-03195655c57b\") " Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.093509 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb54f40-e963-4c6c-9a9c-03195655c57b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.097552 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/adb54f40-e963-4c6c-9a9c-03195655c57b-kube-api-access-7dnpn" (OuterVolumeSpecName: "kube-api-access-7dnpn") pod "adb54f40-e963-4c6c-9a9c-03195655c57b" (UID: "adb54f40-e963-4c6c-9a9c-03195655c57b"). InnerVolumeSpecName "kube-api-access-7dnpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.098594 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-scripts" (OuterVolumeSpecName: "scripts") pod "adb54f40-e963-4c6c-9a9c-03195655c57b" (UID: "adb54f40-e963-4c6c-9a9c-03195655c57b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.161623 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-config-data" (OuterVolumeSpecName: "config-data") pod "adb54f40-e963-4c6c-9a9c-03195655c57b" (UID: "adb54f40-e963-4c6c-9a9c-03195655c57b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.175880 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.176635 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adb54f40-e963-4c6c-9a9c-03195655c57b" (UID: "adb54f40-e963-4c6c-9a9c-03195655c57b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.196214 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.196249 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dnpn\" (UniqueName: \"kubernetes.io/projected/adb54f40-e963-4c6c-9a9c-03195655c57b-kube-api-access-7dnpn\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.196259 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.196269 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.208670 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "adb54f40-e963-4c6c-9a9c-03195655c57b" (UID: "adb54f40-e963-4c6c-9a9c-03195655c57b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.217070 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "adb54f40-e963-4c6c-9a9c-03195655c57b" (UID: "adb54f40-e963-4c6c-9a9c-03195655c57b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.299065 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.299093 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb54f40-e963-4c6c-9a9c-03195655c57b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.589794 4746 generic.go:334] "Generic (PLEG): container finished" podID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerID="cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c" exitCode=0 Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.589862 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb46889d-2pzb6" event={"ID":"adb54f40-e963-4c6c-9a9c-03195655c57b","Type":"ContainerDied","Data":"cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c"} Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.589896 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb46889d-2pzb6" event={"ID":"adb54f40-e963-4c6c-9a9c-03195655c57b","Type":"ContainerDied","Data":"ea3269840757664c2d1c7ae09a86661364f3ec5b25e30cb5472fb6c00c1b3c80"} Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.589914 4746 scope.go:117] "RemoveContainer" containerID="cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.589925 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7fb46889d-2pzb6" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.596633 4746 generic.go:334] "Generic (PLEG): container finished" podID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerID="668dd39faa1f7930d858bbf7165922ffc5f497d923c317f7332f5656ce122166" exitCode=2 Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.596665 4746 generic.go:334] "Generic (PLEG): container finished" podID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerID="8c9d994150da69a0f92e9938b86ea3e7f3e361c9b85f11d54c0c84e9d7a3e56f" exitCode=0 Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.596716 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerDied","Data":"668dd39faa1f7930d858bbf7165922ffc5f497d923c317f7332f5656ce122166"} Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.596738 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerDied","Data":"8c9d994150da69a0f92e9938b86ea3e7f3e361c9b85f11d54c0c84e9d7a3e56f"} Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.599101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" event={"ID":"5c0d701f-0ba6-4836-b3f9-1425b411d80d","Type":"ContainerStarted","Data":"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"} Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.599598 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.599653 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.630557 4746 scope.go:117] "RemoveContainer" containerID="af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.632409 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" podStartSLOduration=2.632389254 podStartE2EDuration="2.632389254s" podCreationTimestamp="2026-01-29 16:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:05.621503666 +0000 UTC m=+1348.022088360" watchObservedRunningTime="2026-01-29 16:57:05.632389254 +0000 UTC m=+1348.032973898" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.646980 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7fb46889d-2pzb6"] Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.654833 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7fb46889d-2pzb6"] Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.656129 4746 scope.go:117] "RemoveContainer" containerID="cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c" Jan 29 16:57:05 crc kubenswrapper[4746]: E0129 16:57:05.657944 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c\": container with ID starting with cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c not found: ID does not exist" containerID="cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c" 
Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.658002 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c"} err="failed to get container status \"cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c\": rpc error: code = NotFound desc = could not find container \"cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c\": container with ID starting with cc9ef7edfdbaf84164f80c84fde166214111c9377f2544f3129429fa0c69ba5c not found: ID does not exist" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.658031 4746 scope.go:117] "RemoveContainer" containerID="af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8" Jan 29 16:57:05 crc kubenswrapper[4746]: E0129 16:57:05.658362 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8\": container with ID starting with af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8 not found: ID does not exist" containerID="af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8" Jan 29 16:57:05 crc kubenswrapper[4746]: I0129 16:57:05.658395 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8"} err="failed to get container status \"af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8\": rpc error: code = NotFound desc = could not find container \"af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8\": container with ID starting with af000f9ddc961870d224eadd54598d66ab39c7801dd5d8b067f598339e98c7d8 not found: ID does not exist" Jan 29 16:57:06 crc kubenswrapper[4746]: I0129 16:57:06.458244 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" path="/var/lib/kubelet/pods/adb54f40-e963-4c6c-9a9c-03195655c57b/volumes" Jan 29 16:57:08 crc kubenswrapper[4746]: I0129 16:57:08.708527 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:57:10 crc kubenswrapper[4746]: I0129 16:57:10.374686 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 16:57:10 crc kubenswrapper[4746]: I0129 16:57:10.659548 4746 generic.go:334] "Generic (PLEG): container finished" podID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerID="5dadf115c38f48d1956360f46ff2a8c876bfc81da686065c8716e4dc2f58d14a" exitCode=0 Jan 29 16:57:10 crc kubenswrapper[4746]: I0129 16:57:10.659609 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerDied","Data":"5dadf115c38f48d1956360f46ff2a8c876bfc81da686065c8716e4dc2f58d14a"} Jan 29 16:57:11 crc kubenswrapper[4746]: I0129 16:57:11.367912 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c4b578977-hfn59" Jan 29 16:57:11 crc kubenswrapper[4746]: I0129 16:57:11.425234 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58cbb9c4c4-qq49j"] Jan 29 16:57:11 crc kubenswrapper[4746]: I0129 16:57:11.425490 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58cbb9c4c4-qq49j" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" 
containerName="neutron-api" containerID="cri-o://3c2347442a4f50c53e94d41ce0e41388371a9772c4198a7789aa0ba9fa1b690c" gracePeriod=30 Jan 29 16:57:11 crc kubenswrapper[4746]: I0129 16:57:11.425949 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58cbb9c4c4-qq49j" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerName="neutron-httpd" containerID="cri-o://7a8ce438daed6fbd897f527086cd302365accd9de795981da2cea196144576a1" gracePeriod=30 Jan 29 16:57:11 crc kubenswrapper[4746]: I0129 16:57:11.672450 4746 generic.go:334] "Generic (PLEG): container finished" podID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerID="7a8ce438daed6fbd897f527086cd302365accd9de795981da2cea196144576a1" exitCode=0 Jan 29 16:57:11 crc kubenswrapper[4746]: I0129 16:57:11.672489 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbb9c4c4-qq49j" event={"ID":"7436ae82-3679-4ddd-bf25-ab3a104a8395","Type":"ContainerDied","Data":"7a8ce438daed6fbd897f527086cd302365accd9de795981da2cea196144576a1"} Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.562445 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.565743 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.802110 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.883676 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-run-httpd\") pod \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.884090 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e80703aa-8645-4cdb-8c1b-5511ef93bc83" (UID: "e80703aa-8645-4cdb-8c1b-5511ef93bc83"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.884164 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-sg-core-conf-yaml\") pod \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.884418 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-log-httpd\") pod \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.884682 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e80703aa-8645-4cdb-8c1b-5511ef93bc83" (UID: "e80703aa-8645-4cdb-8c1b-5511ef93bc83"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.884748 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-scripts\") pod \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.885464 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pspgj\" (UniqueName: \"kubernetes.io/projected/e80703aa-8645-4cdb-8c1b-5511ef93bc83-kube-api-access-pspgj\") pod \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.885519 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-config-data\") pod \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.885554 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-combined-ca-bundle\") pod \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\" (UID: \"e80703aa-8645-4cdb-8c1b-5511ef93bc83\") " Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.902341 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-scripts" (OuterVolumeSpecName: "scripts") pod "e80703aa-8645-4cdb-8c1b-5511ef93bc83" (UID: "e80703aa-8645-4cdb-8c1b-5511ef93bc83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.905876 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80703aa-8645-4cdb-8c1b-5511ef93bc83-kube-api-access-pspgj" (OuterVolumeSpecName: "kube-api-access-pspgj") pod "e80703aa-8645-4cdb-8c1b-5511ef93bc83" (UID: "e80703aa-8645-4cdb-8c1b-5511ef93bc83"). InnerVolumeSpecName "kube-api-access-pspgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.905964 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.906004 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e80703aa-8645-4cdb-8c1b-5511ef93bc83-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.934054 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e80703aa-8645-4cdb-8c1b-5511ef93bc83" (UID: "e80703aa-8645-4cdb-8c1b-5511ef93bc83"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.956405 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e80703aa-8645-4cdb-8c1b-5511ef93bc83" (UID: "e80703aa-8645-4cdb-8c1b-5511ef93bc83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:13 crc kubenswrapper[4746]: I0129 16:57:13.967040 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-config-data" (OuterVolumeSpecName: "config-data") pod "e80703aa-8645-4cdb-8c1b-5511ef93bc83" (UID: "e80703aa-8645-4cdb-8c1b-5511ef93bc83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.007361 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pspgj\" (UniqueName: \"kubernetes.io/projected/e80703aa-8645-4cdb-8c1b-5511ef93bc83-kube-api-access-pspgj\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.007406 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.007422 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.007436 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.007449 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e80703aa-8645-4cdb-8c1b-5511ef93bc83-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.725886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0185f119-b92f-4f05-9d0d-f0b2e081c331","Type":"ContainerStarted","Data":"e4ad991870d64b906f98b966d5a79dd93b5367dac6510d8b9f9b6e56123d442e"} Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.728730 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e80703aa-8645-4cdb-8c1b-5511ef93bc83","Type":"ContainerDied","Data":"dae91903ae0f2a2e16bfd0e73bf20943a279fab4d531b26fe6f37deebc8b1262"} Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.728782 4746 scope.go:117] "RemoveContainer" containerID="668dd39faa1f7930d858bbf7165922ffc5f497d923c317f7332f5656ce122166" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.728824 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.749171 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.464563956 podStartE2EDuration="11.749151973s" podCreationTimestamp="2026-01-29 16:57:03 +0000 UTC" firstStartedPulling="2026-01-29 16:57:04.222731658 +0000 UTC m=+1346.623316302" lastFinishedPulling="2026-01-29 16:57:13.507319675 +0000 UTC m=+1355.907904319" observedRunningTime="2026-01-29 16:57:14.745372798 +0000 UTC m=+1357.145957442" watchObservedRunningTime="2026-01-29 16:57:14.749151973 +0000 UTC m=+1357.149736617" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.757769 4746 scope.go:117] "RemoveContainer" containerID="5dadf115c38f48d1956360f46ff2a8c876bfc81da686065c8716e4dc2f58d14a" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.810297 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.810688 4746 scope.go:117] "RemoveContainer" containerID="8c9d994150da69a0f92e9938b86ea3e7f3e361c9b85f11d54c0c84e9d7a3e56f" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.816803 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.837389 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:14 crc kubenswrapper[4746]: E0129 16:57:14.837983 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-central-agent" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.838080 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-central-agent" Jan 29 16:57:14 crc kubenswrapper[4746]: E0129 16:57:14.838159 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="sg-core" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.838266 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="sg-core" Jan 29 16:57:14 crc kubenswrapper[4746]: E0129 16:57:14.838329 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-log" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.838429 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-log" Jan 29 16:57:14 crc kubenswrapper[4746]: E0129 16:57:14.838492 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-notification-agent" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.838543 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-notification-agent" Jan 29 16:57:14 crc kubenswrapper[4746]: E0129 16:57:14.838600 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-api" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.838655 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-api" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.838875 4746 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="sg-core" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.838943 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-api" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.839015 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb54f40-e963-4c6c-9a9c-03195655c57b" containerName="placement-log" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.839071 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-central-agent" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.839130 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" containerName="ceilometer-notification-agent" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.845522 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.845694 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.848553 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:57:14 crc kubenswrapper[4746]: I0129 16:57:14.850289 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.027537 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.027592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-config-data\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.027623 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csd56\" (UniqueName: \"kubernetes.io/projected/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-kube-api-access-csd56\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.027807 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-run-httpd\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.028046 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-scripts\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.028298 4746 
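
Note: the paired cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 lines above are housekeeping, despite the E log level. When the replacement ceilometer-0 was ADDed, the CPU and memory managers found per-container resource-state entries left over from the deleted pod UIDs (e80703aa-... and adb54f40-...) and purged them. Each "RemoveStaleState: removing container" E-line should be matched by a "Deleted CPUSet assignment" I-line for the same pod/container pair, as it is here; a cross-check sketch under that assumption:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Cross-check sketch: every "RemoveStaleState: removing container" E-line
// should have a matching "Deleted CPUSet assignment" I-line for the same
// (podUID, containerName) pair.
var pair = regexp.MustCompile(`podUID="([^"]+)" containerName="([^"]+)"`)

func main() {
	removed, cleaned := map[string]bool{}, map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		m := pair.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		key := m[1] + "/" + m[2]
		switch {
		case strings.Contains(line, "RemoveStaleState: removing container"):
			removed[key] = true
		case strings.Contains(line, "Deleted CPUSet assignment"):
			cleaned[key] = true
		}
	}
	for k := range removed {
		if !cleaned[k] {
			fmt.Println("no CPUSet cleanup seen for", k)
		}
	}
}
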
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-log-httpd\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.028369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.129863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csd56\" (UniqueName: \"kubernetes.io/projected/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-kube-api-access-csd56\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.129920 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-run-httpd\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.129985 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-scripts\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.130050 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-log-httpd\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.130069 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.130108 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.130129 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-config-data\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.130494 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-run-httpd\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 
16:57:15.130576 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-log-httpd\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.134631 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.140950 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.147080 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-scripts\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.147936 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-config-data\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.161027 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csd56\" (UniqueName: \"kubernetes.io/projected/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-kube-api-access-csd56\") pod \"ceilometer-0\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.164833 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.632814 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:15 crc kubenswrapper[4746]: W0129 16:57:15.682637 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf69a2f8f_ab26_4e1a_abf9_8995d0f5f528.slice/crio-93b6625b88e881d7f1a3ae187d9a6fc5649ff30b505ad2e7968b91204d983910 WatchSource:0}: Error finding container 93b6625b88e881d7f1a3ae187d9a6fc5649ff30b505ad2e7968b91204d983910: Status 404 returned error can't find the container with id 93b6625b88e881d7f1a3ae187d9a6fc5649ff30b505ad2e7968b91204d983910 Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.759482 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerStarted","Data":"93b6625b88e881d7f1a3ae187d9a6fc5649ff30b505ad2e7968b91204d983910"} Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.763706 4746 generic.go:334] "Generic (PLEG): container finished" podID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerID="3c2347442a4f50c53e94d41ce0e41388371a9772c4198a7789aa0ba9fa1b690c" exitCode=0 Jan 29 16:57:15 crc kubenswrapper[4746]: I0129 16:57:15.763796 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbb9c4c4-qq49j" event={"ID":"7436ae82-3679-4ddd-bf25-ab3a104a8395","Type":"ContainerDied","Data":"3c2347442a4f50c53e94d41ce0e41388371a9772c4198a7789aa0ba9fa1b690c"} Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.120411 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.249033 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-ovndb-tls-certs\") pod \"7436ae82-3679-4ddd-bf25-ab3a104a8395\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.249361 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-httpd-config\") pod \"7436ae82-3679-4ddd-bf25-ab3a104a8395\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.249455 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-config\") pod \"7436ae82-3679-4ddd-bf25-ab3a104a8395\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.249522 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtl5c\" (UniqueName: \"kubernetes.io/projected/7436ae82-3679-4ddd-bf25-ab3a104a8395-kube-api-access-jtl5c\") pod \"7436ae82-3679-4ddd-bf25-ab3a104a8395\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.249575 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-combined-ca-bundle\") pod \"7436ae82-3679-4ddd-bf25-ab3a104a8395\" (UID: \"7436ae82-3679-4ddd-bf25-ab3a104a8395\") " 
Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.254694 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7436ae82-3679-4ddd-bf25-ab3a104a8395" (UID: "7436ae82-3679-4ddd-bf25-ab3a104a8395"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.254935 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7436ae82-3679-4ddd-bf25-ab3a104a8395-kube-api-access-jtl5c" (OuterVolumeSpecName: "kube-api-access-jtl5c") pod "7436ae82-3679-4ddd-bf25-ab3a104a8395" (UID: "7436ae82-3679-4ddd-bf25-ab3a104a8395"). InnerVolumeSpecName "kube-api-access-jtl5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.322033 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7436ae82-3679-4ddd-bf25-ab3a104a8395" (UID: "7436ae82-3679-4ddd-bf25-ab3a104a8395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.327125 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-config" (OuterVolumeSpecName: "config") pod "7436ae82-3679-4ddd-bf25-ab3a104a8395" (UID: "7436ae82-3679-4ddd-bf25-ab3a104a8395"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.351228 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtl5c\" (UniqueName: \"kubernetes.io/projected/7436ae82-3679-4ddd-bf25-ab3a104a8395-kube-api-access-jtl5c\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.351256 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.351265 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.351275 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.354016 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7436ae82-3679-4ddd-bf25-ab3a104a8395" (UID: "7436ae82-3679-4ddd-bf25-ab3a104a8395"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.452287 4746 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7436ae82-3679-4ddd-bf25-ab3a104a8395-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.457635 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80703aa-8645-4cdb-8c1b-5511ef93bc83" path="/var/lib/kubelet/pods/e80703aa-8645-4cdb-8c1b-5511ef93bc83/volumes" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.772369 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbb9c4c4-qq49j" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.772384 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbb9c4c4-qq49j" event={"ID":"7436ae82-3679-4ddd-bf25-ab3a104a8395","Type":"ContainerDied","Data":"88299231264c8328e373528f7da766ee3e2c1dda5aff04d209be144aeaf3fe42"} Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.772547 4746 scope.go:117] "RemoveContainer" containerID="7a8ce438daed6fbd897f527086cd302365accd9de795981da2cea196144576a1" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.774610 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerStarted","Data":"560631ede48f179add5249b5b213de34c163fb6f32ef2d9ac4337dae89bd3d11"} Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.798711 4746 scope.go:117] "RemoveContainer" containerID="3c2347442a4f50c53e94d41ce0e41388371a9772c4198a7789aa0ba9fa1b690c" Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.799060 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58cbb9c4c4-qq49j"] Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.808239 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58cbb9c4c4-qq49j"] Jan 29 16:57:16 crc kubenswrapper[4746]: I0129 16:57:16.991134 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:17 crc kubenswrapper[4746]: I0129 16:57:17.831318 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerStarted","Data":"7b44cd753ee2821260007229e85457f9ff311faf1d7e395129c09d6632465b15"} Jan 29 16:57:18 crc kubenswrapper[4746]: E0129 16:57:18.234258 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:57:18 crc kubenswrapper[4746]: E0129 16:57:18.234470 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csd56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f69a2f8f-ab26-4e1a-abf9-8995d0f5f528): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:18 crc kubenswrapper[4746]: E0129 16:57:18.235607 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" Jan 29 16:57:18 crc kubenswrapper[4746]: I0129 16:57:18.456664 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" path="/var/lib/kubelet/pods/7436ae82-3679-4ddd-bf25-ab3a104a8395/volumes" Jan 29 16:57:18 crc kubenswrapper[4746]: I0129 16:57:18.843268 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerStarted","Data":"1fc02b495295ba910b1cb4986c664cda1bbf8ffdf4a98c29cb49ed36df761951"} Jan 29 16:57:18 crc kubenswrapper[4746]: I0129 16:57:18.843632 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="sg-core" containerID="cri-o://1fc02b495295ba910b1cb4986c664cda1bbf8ffdf4a98c29cb49ed36df761951" gracePeriod=30 Jan 29 16:57:18 crc kubenswrapper[4746]: I0129 16:57:18.843641 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-notification-agent" containerID="cri-o://7b44cd753ee2821260007229e85457f9ff311faf1d7e395129c09d6632465b15" gracePeriod=30 Jan 29 16:57:18 crc kubenswrapper[4746]: I0129 16:57:18.843636 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-central-agent" containerID="cri-o://560631ede48f179add5249b5b213de34c163fb6f32ef2d9ac4337dae89bd3d11" gracePeriod=30 Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.855733 4746 generic.go:334] "Generic (PLEG): container finished" podID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerID="1fc02b495295ba910b1cb4986c664cda1bbf8ffdf4a98c29cb49ed36df761951" exitCode=2 Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.855771 4746 generic.go:334] "Generic (PLEG): container finished" podID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerID="7b44cd753ee2821260007229e85457f9ff311faf1d7e395129c09d6632465b15" exitCode=0 Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.855795 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerDied","Data":"1fc02b495295ba910b1cb4986c664cda1bbf8ffdf4a98c29cb49ed36df761951"} Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.855824 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerDied","Data":"7b44cd753ee2821260007229e85457f9ff311faf1d7e395129c09d6632465b15"} Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.947089 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-z7zjw"] Jan 29 16:57:19 crc kubenswrapper[4746]: E0129 16:57:19.947507 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerName="neutron-api" Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.947524 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerName="neutron-api" Jan 29 16:57:19 crc kubenswrapper[4746]: E0129 16:57:19.947545 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerName="neutron-httpd" Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.947553 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerName="neutron-httpd" Jan 29 16:57:19 crc 
kubenswrapper[4746]: I0129 16:57:19.947727 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerName="neutron-httpd" Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.947745 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7436ae82-3679-4ddd-bf25-ab3a104a8395" containerName="neutron-api" Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.948491 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:19 crc kubenswrapper[4746]: I0129 16:57:19.962949 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-z7zjw"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.040819 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-t5tbp"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.044110 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.053098 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t5tbp"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.077015 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ee86-account-create-update-wj5mf"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.078804 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.083375 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.086779 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ee86-account-create-update-wj5mf"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.128139 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5w84\" (UniqueName: \"kubernetes.io/projected/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-kube-api-access-h5w84\") pod \"nova-api-db-create-z7zjw\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.128288 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-operator-scripts\") pod \"nova-api-db-create-z7zjw\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.229740 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7m6m\" (UniqueName: \"kubernetes.io/projected/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-kube-api-access-d7m6m\") pod \"nova-cell0-db-create-t5tbp\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.230135 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-operator-scripts\") pod \"nova-api-db-create-z7zjw\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " pod="openstack/nova-api-db-create-z7zjw" 
Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.231120 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-operator-scripts\") pod \"nova-api-db-create-z7zjw\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.231369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcjwh\" (UniqueName: \"kubernetes.io/projected/d7312900-d50f-4b7a-9b16-fb9487c1ad62-kube-api-access-dcjwh\") pod \"nova-api-ee86-account-create-update-wj5mf\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.231443 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7312900-d50f-4b7a-9b16-fb9487c1ad62-operator-scripts\") pod \"nova-api-ee86-account-create-update-wj5mf\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.232249 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5w84\" (UniqueName: \"kubernetes.io/projected/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-kube-api-access-h5w84\") pod \"nova-api-db-create-z7zjw\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.232386 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-operator-scripts\") pod \"nova-cell0-db-create-t5tbp\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.254921 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lmdqh"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.270331 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5w84\" (UniqueName: \"kubernetes.io/projected/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-kube-api-access-h5w84\") pod \"nova-api-db-create-z7zjw\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.278552 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lmdqh"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.278674 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.293454 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-628f-account-create-update-xfw92"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.294628 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.296702 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.306922 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-628f-account-create-update-xfw92"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.335780 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d51988e-2959-4dc6-af55-b3c40c2428ee-operator-scripts\") pod \"nova-cell1-db-create-lmdqh\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.335847 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcjwh\" (UniqueName: \"kubernetes.io/projected/d7312900-d50f-4b7a-9b16-fb9487c1ad62-kube-api-access-dcjwh\") pod \"nova-api-ee86-account-create-update-wj5mf\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.335873 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7312900-d50f-4b7a-9b16-fb9487c1ad62-operator-scripts\") pod \"nova-api-ee86-account-create-update-wj5mf\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.335931 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgrbc\" (UniqueName: \"kubernetes.io/projected/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-kube-api-access-tgrbc\") pod \"nova-cell0-628f-account-create-update-xfw92\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.335960 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-operator-scripts\") pod \"nova-cell0-db-create-t5tbp\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.336012 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n745g\" (UniqueName: \"kubernetes.io/projected/4d51988e-2959-4dc6-af55-b3c40c2428ee-kube-api-access-n745g\") pod \"nova-cell1-db-create-lmdqh\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.336043 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-operator-scripts\") pod \"nova-cell0-628f-account-create-update-xfw92\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.336064 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d7m6m\" (UniqueName: \"kubernetes.io/projected/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-kube-api-access-d7m6m\") pod \"nova-cell0-db-create-t5tbp\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.336755 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7312900-d50f-4b7a-9b16-fb9487c1ad62-operator-scripts\") pod \"nova-api-ee86-account-create-update-wj5mf\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.337089 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-operator-scripts\") pod \"nova-cell0-db-create-t5tbp\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.354089 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcjwh\" (UniqueName: \"kubernetes.io/projected/d7312900-d50f-4b7a-9b16-fb9487c1ad62-kube-api-access-dcjwh\") pod \"nova-api-ee86-account-create-update-wj5mf\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.354171 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7m6m\" (UniqueName: \"kubernetes.io/projected/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-kube-api-access-d7m6m\") pod \"nova-cell0-db-create-t5tbp\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.360505 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.399019 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.444441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n745g\" (UniqueName: \"kubernetes.io/projected/4d51988e-2959-4dc6-af55-b3c40c2428ee-kube-api-access-n745g\") pod \"nova-cell1-db-create-lmdqh\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.444537 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-operator-scripts\") pod \"nova-cell0-628f-account-create-update-xfw92\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.444726 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d51988e-2959-4dc6-af55-b3c40c2428ee-operator-scripts\") pod \"nova-cell1-db-create-lmdqh\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.444916 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgrbc\" (UniqueName: \"kubernetes.io/projected/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-kube-api-access-tgrbc\") pod \"nova-cell0-628f-account-create-update-xfw92\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.446736 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-operator-scripts\") pod \"nova-cell0-628f-account-create-update-xfw92\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.447629 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d51988e-2959-4dc6-af55-b3c40c2428ee-operator-scripts\") pod \"nova-cell1-db-create-lmdqh\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.472106 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n745g\" (UniqueName: \"kubernetes.io/projected/4d51988e-2959-4dc6-af55-b3c40c2428ee-kube-api-access-n745g\") pod \"nova-cell1-db-create-lmdqh\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.475154 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgrbc\" (UniqueName: \"kubernetes.io/projected/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-kube-api-access-tgrbc\") pod \"nova-cell0-628f-account-create-update-xfw92\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.478459 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3ee3-account-create-update-wwrbf"] Jan 29 16:57:20 crc 
kubenswrapper[4746]: I0129 16:57:20.480529 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.483244 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.512307 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3ee3-account-create-update-wwrbf"] Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.551912 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r276\" (UniqueName: \"kubernetes.io/projected/e8d29372-486b-4db2-8e2e-cd09059c9edc-kube-api-access-2r276\") pod \"nova-cell1-3ee3-account-create-update-wwrbf\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.551972 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d29372-486b-4db2-8e2e-cd09059c9edc-operator-scripts\") pod \"nova-cell1-3ee3-account-create-update-wwrbf\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.557082 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.568786 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.655490 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r276\" (UniqueName: \"kubernetes.io/projected/e8d29372-486b-4db2-8e2e-cd09059c9edc-kube-api-access-2r276\") pod \"nova-cell1-3ee3-account-create-update-wwrbf\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.655541 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d29372-486b-4db2-8e2e-cd09059c9edc-operator-scripts\") pod \"nova-cell1-3ee3-account-create-update-wwrbf\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.656721 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d29372-486b-4db2-8e2e-cd09059c9edc-operator-scripts\") pod \"nova-cell1-3ee3-account-create-update-wwrbf\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.656776 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.680110 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r276\" (UniqueName: \"kubernetes.io/projected/e8d29372-486b-4db2-8e2e-cd09059c9edc-kube-api-access-2r276\") pod \"nova-cell1-3ee3-account-create-update-wwrbf\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.884260 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:20 crc kubenswrapper[4746]: I0129 16:57:20.928275 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t5tbp"] Jan 29 16:57:20 crc kubenswrapper[4746]: W0129 16:57:20.934474 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f1f9808_a20d_4fdc_b2ea_586d3b917cc8.slice/crio-91f9fd266d020a496114bdb620d13b01edf6ac1e63fd52245e2ae404f5f3d48c WatchSource:0}: Error finding container 91f9fd266d020a496114bdb620d13b01edf6ac1e63fd52245e2ae404f5f3d48c: Status 404 returned error can't find the container with id 91f9fd266d020a496114bdb620d13b01edf6ac1e63fd52245e2ae404f5f3d48c Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.029290 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ee86-account-create-update-wj5mf"] Jan 29 16:57:21 crc kubenswrapper[4746]: W0129 16:57:21.045980 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7312900_d50f_4b7a_9b16_fb9487c1ad62.slice/crio-5bc1db21ea9f33655f913c83a982dcad5b3a5564bc828aaeeab8f7ddd51ac314 WatchSource:0}: Error finding container 5bc1db21ea9f33655f913c83a982dcad5b3a5564bc828aaeeab8f7ddd51ac314: Status 404 returned error can't find the container with id 5bc1db21ea9f33655f913c83a982dcad5b3a5564bc828aaeeab8f7ddd51ac314 Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.168646 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-z7zjw"] Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.254464 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-628f-account-create-update-xfw92"] Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.347803 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lmdqh"] Jan 29 16:57:21 crc kubenswrapper[4746]: W0129 16:57:21.363411 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d51988e_2959_4dc6_af55_b3c40c2428ee.slice/crio-210a98cf6e89a6223de964785e807bf8b947ef1c28df43589812937c3e89f0d6 WatchSource:0}: Error finding container 210a98cf6e89a6223de964785e807bf8b947ef1c28df43589812937c3e89f0d6: Status 404 returned error can't find the container with id 210a98cf6e89a6223de964785e807bf8b947ef1c28df43589812937c3e89f0d6 Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.455341 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3ee3-account-create-update-wwrbf"] Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.894880 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-628f-account-create-update-xfw92" 
event={"ID":"a3d3b8c6-5997-4881-8b92-b5244c49fd1c","Type":"ContainerStarted","Data":"1ecc4b9d71375f4912841c0ce64ae3b86930bc5b0cdf26d905fb011a69e81a3f"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.894971 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-628f-account-create-update-xfw92" event={"ID":"a3d3b8c6-5997-4881-8b92-b5244c49fd1c","Type":"ContainerStarted","Data":"f453378012e42e8b33325c87505ca469d7cd7502ae4908d67d642ccde7192d31"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.908511 4746 generic.go:334] "Generic (PLEG): container finished" podID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerID="560631ede48f179add5249b5b213de34c163fb6f32ef2d9ac4337dae89bd3d11" exitCode=0 Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.908668 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerDied","Data":"560631ede48f179add5249b5b213de34c163fb6f32ef2d9ac4337dae89bd3d11"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.910899 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" event={"ID":"e8d29372-486b-4db2-8e2e-cd09059c9edc","Type":"ContainerStarted","Data":"1c07233ce1d10220cf97e784147f808ac75d4b0881f9c9f6a83233ede2ff6a31"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.910972 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" event={"ID":"e8d29372-486b-4db2-8e2e-cd09059c9edc","Type":"ContainerStarted","Data":"f262d3171e08b25892b1f6244663956fb57903d52c9c68616e6c4808b5fb3291"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.912593 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lmdqh" event={"ID":"4d51988e-2959-4dc6-af55-b3c40c2428ee","Type":"ContainerStarted","Data":"2c68e474ae68ccd262214e30cfd3e1d88e25431121c450bb429bf23bb47d050a"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.912642 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lmdqh" event={"ID":"4d51988e-2959-4dc6-af55-b3c40c2428ee","Type":"ContainerStarted","Data":"210a98cf6e89a6223de964785e807bf8b947ef1c28df43589812937c3e89f0d6"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.916945 4746 generic.go:334] "Generic (PLEG): container finished" podID="d7312900-d50f-4b7a-9b16-fb9487c1ad62" containerID="021c90f39cc987692e39d8960c72a480e84d52e1479dba6e30fa872f71b14e33" exitCode=0 Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.917094 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ee86-account-create-update-wj5mf" event={"ID":"d7312900-d50f-4b7a-9b16-fb9487c1ad62","Type":"ContainerDied","Data":"021c90f39cc987692e39d8960c72a480e84d52e1479dba6e30fa872f71b14e33"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.917137 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ee86-account-create-update-wj5mf" event={"ID":"d7312900-d50f-4b7a-9b16-fb9487c1ad62","Type":"ContainerStarted","Data":"5bc1db21ea9f33655f913c83a982dcad5b3a5564bc828aaeeab8f7ddd51ac314"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.919474 4746 generic.go:334] "Generic (PLEG): container finished" podID="0f1f9808-a20d-4fdc-b2ea-586d3b917cc8" containerID="676b786471a6f2475f210fe837e5982651b21b491df519ba3459ee1c6a079bf1" exitCode=0 Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.919583 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t5tbp" event={"ID":"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8","Type":"ContainerDied","Data":"676b786471a6f2475f210fe837e5982651b21b491df519ba3459ee1c6a079bf1"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.919611 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t5tbp" event={"ID":"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8","Type":"ContainerStarted","Data":"91f9fd266d020a496114bdb620d13b01edf6ac1e63fd52245e2ae404f5f3d48c"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.922014 4746 generic.go:334] "Generic (PLEG): container finished" podID="8319ac14-e61e-4c9f-b22a-b2f08d0a6723" containerID="b9ad39947cac608b67c1042a6d2058a56f2f61b58c5c87e8da33d420616856ec" exitCode=0 Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.922099 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z7zjw" event={"ID":"8319ac14-e61e-4c9f-b22a-b2f08d0a6723","Type":"ContainerDied","Data":"b9ad39947cac608b67c1042a6d2058a56f2f61b58c5c87e8da33d420616856ec"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.922131 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z7zjw" event={"ID":"8319ac14-e61e-4c9f-b22a-b2f08d0a6723","Type":"ContainerStarted","Data":"53382b497cd36a18a9e7d621e9638700c053600e2580effc765a24245bf895c8"} Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.947732 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-628f-account-create-update-xfw92" podStartSLOduration=1.94770903 podStartE2EDuration="1.94770903s" podCreationTimestamp="2026-01-29 16:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:21.916655699 +0000 UTC m=+1364.317240363" watchObservedRunningTime="2026-01-29 16:57:21.94770903 +0000 UTC m=+1364.348293684" Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.960095 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-lmdqh" podStartSLOduration=1.9600725190000001 podStartE2EDuration="1.960072519s" podCreationTimestamp="2026-01-29 16:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:21.930449207 +0000 UTC m=+1364.331033851" watchObservedRunningTime="2026-01-29 16:57:21.960072519 +0000 UTC m=+1364.360657163" Jan 29 16:57:21 crc kubenswrapper[4746]: I0129 16:57:21.983265 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" podStartSLOduration=1.9832450640000001 podStartE2EDuration="1.983245064s" podCreationTimestamp="2026-01-29 16:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:21.946551888 +0000 UTC m=+1364.347136562" watchObservedRunningTime="2026-01-29 16:57:21.983245064 +0000 UTC m=+1364.383829708" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.017239 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.043516 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-log-httpd\") pod \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.043577 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-scripts\") pod \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.043661 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-config-data\") pod \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.043778 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csd56\" (UniqueName: \"kubernetes.io/projected/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-kube-api-access-csd56\") pod \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.043801 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-sg-core-conf-yaml\") pod \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.043821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-run-httpd\") pod \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.043851 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-combined-ca-bundle\") pod \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\" (UID: \"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528\") " Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.045195 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" (UID: "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.047816 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" (UID: "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.051302 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-scripts" (OuterVolumeSpecName: "scripts") pod "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" (UID: "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.051619 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-kube-api-access-csd56" (OuterVolumeSpecName: "kube-api-access-csd56") pod "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" (UID: "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528"). InnerVolumeSpecName "kube-api-access-csd56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.075072 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" (UID: "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.093029 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" (UID: "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.100495 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-config-data" (OuterVolumeSpecName: "config-data") pod "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" (UID: "f69a2f8f-ab26-4e1a-abf9-8995d0f5f528"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.146308 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.146352 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.146371 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.146383 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csd56\" (UniqueName: \"kubernetes.io/projected/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-kube-api-access-csd56\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.146399 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.146410 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.146418 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.932379 4746 generic.go:334] "Generic (PLEG): container finished" podID="a3d3b8c6-5997-4881-8b92-b5244c49fd1c" containerID="1ecc4b9d71375f4912841c0ce64ae3b86930bc5b0cdf26d905fb011a69e81a3f" exitCode=0 Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.932425 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-628f-account-create-update-xfw92" event={"ID":"a3d3b8c6-5997-4881-8b92-b5244c49fd1c","Type":"ContainerDied","Data":"1ecc4b9d71375f4912841c0ce64ae3b86930bc5b0cdf26d905fb011a69e81a3f"} Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.937456 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f69a2f8f-ab26-4e1a-abf9-8995d0f5f528","Type":"ContainerDied","Data":"93b6625b88e881d7f1a3ae187d9a6fc5649ff30b505ad2e7968b91204d983910"} Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.937550 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.937804 4746 scope.go:117] "RemoveContainer" containerID="1fc02b495295ba910b1cb4986c664cda1bbf8ffdf4a98c29cb49ed36df761951" Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.940522 4746 generic.go:334] "Generic (PLEG): container finished" podID="e8d29372-486b-4db2-8e2e-cd09059c9edc" containerID="1c07233ce1d10220cf97e784147f808ac75d4b0881f9c9f6a83233ede2ff6a31" exitCode=0 Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.940609 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" event={"ID":"e8d29372-486b-4db2-8e2e-cd09059c9edc","Type":"ContainerDied","Data":"1c07233ce1d10220cf97e784147f808ac75d4b0881f9c9f6a83233ede2ff6a31"} Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.942326 4746 generic.go:334] "Generic (PLEG): container finished" podID="4d51988e-2959-4dc6-af55-b3c40c2428ee" containerID="2c68e474ae68ccd262214e30cfd3e1d88e25431121c450bb429bf23bb47d050a" exitCode=0 Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.942358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lmdqh" event={"ID":"4d51988e-2959-4dc6-af55-b3c40c2428ee","Type":"ContainerDied","Data":"2c68e474ae68ccd262214e30cfd3e1d88e25431121c450bb429bf23bb47d050a"} Jan 29 16:57:22 crc kubenswrapper[4746]: I0129 16:57:22.973823 4746 scope.go:117] "RemoveContainer" containerID="7b44cd753ee2821260007229e85457f9ff311faf1d7e395129c09d6632465b15" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.030111 4746 scope.go:117] "RemoveContainer" containerID="560631ede48f179add5249b5b213de34c163fb6f32ef2d9ac4337dae89bd3d11" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.041584 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.054963 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.067050 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:23 crc kubenswrapper[4746]: E0129 16:57:23.067449 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-central-agent" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.067467 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-central-agent" Jan 29 16:57:23 crc kubenswrapper[4746]: E0129 16:57:23.067490 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-notification-agent" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.067496 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-notification-agent" Jan 29 16:57:23 crc kubenswrapper[4746]: E0129 16:57:23.067513 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="sg-core" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.067520 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="sg-core" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.067695 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-central-agent" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.067712 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="sg-core" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.067734 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" containerName="ceilometer-notification-agent" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.069240 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.073761 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.074415 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.093644 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.166025 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-scripts\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.166127 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.166170 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjjv\" (UniqueName: \"kubernetes.io/projected/cf6919a0-014b-4d03-bec1-54d64e5fdd85-kube-api-access-xtjjv\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.166215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.166286 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.166339 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-run-httpd\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.166404 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-log-httpd\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.267828 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-run-httpd\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.267905 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-log-httpd\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.267937 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-scripts\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.267974 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.267999 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjjv\" (UniqueName: \"kubernetes.io/projected/cf6919a0-014b-4d03-bec1-54d64e5fdd85-kube-api-access-xtjjv\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.268021 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.268065 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.269017 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-run-httpd\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.269334 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-log-httpd\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.274975 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.274978 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.276036 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.276205 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-scripts\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.289892 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjjv\" (UniqueName: \"kubernetes.io/projected/cf6919a0-014b-4d03-bec1-54d64e5fdd85-kube-api-access-xtjjv\") pod \"ceilometer-0\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") " pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.359616 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.393917 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.471114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5w84\" (UniqueName: \"kubernetes.io/projected/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-kube-api-access-h5w84\") pod \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.471579 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-operator-scripts\") pod \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\" (UID: \"8319ac14-e61e-4c9f-b22a-b2f08d0a6723\") " Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.472433 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8319ac14-e61e-4c9f-b22a-b2f08d0a6723" (UID: "8319ac14-e61e-4c9f-b22a-b2f08d0a6723"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.480995 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-kube-api-access-h5w84" (OuterVolumeSpecName: "kube-api-access-h5w84") pod "8319ac14-e61e-4c9f-b22a-b2f08d0a6723" (UID: "8319ac14-e61e-4c9f-b22a-b2f08d0a6723"). InnerVolumeSpecName "kube-api-access-h5w84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.530089 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.536870 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.574141 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.574169 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5w84\" (UniqueName: \"kubernetes.io/projected/8319ac14-e61e-4c9f-b22a-b2f08d0a6723-kube-api-access-h5w84\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.675630 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-operator-scripts\") pod \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.675685 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7m6m\" (UniqueName: \"kubernetes.io/projected/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-kube-api-access-d7m6m\") pod \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\" (UID: \"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8\") " Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.675767 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcjwh\" (UniqueName: \"kubernetes.io/projected/d7312900-d50f-4b7a-9b16-fb9487c1ad62-kube-api-access-dcjwh\") pod \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.675800 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7312900-d50f-4b7a-9b16-fb9487c1ad62-operator-scripts\") pod \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\" (UID: \"d7312900-d50f-4b7a-9b16-fb9487c1ad62\") " Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.676312 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f1f9808-a20d-4fdc-b2ea-586d3b917cc8" (UID: "0f1f9808-a20d-4fdc-b2ea-586d3b917cc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.676677 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7312900-d50f-4b7a-9b16-fb9487c1ad62-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7312900-d50f-4b7a-9b16-fb9487c1ad62" (UID: "d7312900-d50f-4b7a-9b16-fb9487c1ad62"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.680407 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7312900-d50f-4b7a-9b16-fb9487c1ad62-kube-api-access-dcjwh" (OuterVolumeSpecName: "kube-api-access-dcjwh") pod "d7312900-d50f-4b7a-9b16-fb9487c1ad62" (UID: "d7312900-d50f-4b7a-9b16-fb9487c1ad62"). InnerVolumeSpecName "kube-api-access-dcjwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.680470 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-kube-api-access-d7m6m" (OuterVolumeSpecName: "kube-api-access-d7m6m") pod "0f1f9808-a20d-4fdc-b2ea-586d3b917cc8" (UID: "0f1f9808-a20d-4fdc-b2ea-586d3b917cc8"). InnerVolumeSpecName "kube-api-access-d7m6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.777900 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.777932 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7m6m\" (UniqueName: \"kubernetes.io/projected/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8-kube-api-access-d7m6m\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.777942 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcjwh\" (UniqueName: \"kubernetes.io/projected/d7312900-d50f-4b7a-9b16-fb9487c1ad62-kube-api-access-dcjwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.777950 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7312900-d50f-4b7a-9b16-fb9487c1ad62-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.830561 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:23 crc kubenswrapper[4746]: W0129 16:57:23.835863 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf6919a0_014b_4d03_bec1_54d64e5fdd85.slice/crio-b31baea1846cf2670caa37b296904f3c15bb87f56a6a69beffdf005424ad74df WatchSource:0}: Error finding container b31baea1846cf2670caa37b296904f3c15bb87f56a6a69beffdf005424ad74df: Status 404 returned error can't find the container with id b31baea1846cf2670caa37b296904f3c15bb87f56a6a69beffdf005424ad74df Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.952687 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-z7zjw" event={"ID":"8319ac14-e61e-4c9f-b22a-b2f08d0a6723","Type":"ContainerDied","Data":"53382b497cd36a18a9e7d621e9638700c053600e2580effc765a24245bf895c8"} Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.953006 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53382b497cd36a18a9e7d621e9638700c053600e2580effc765a24245bf895c8" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.952748 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-z7zjw" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.956588 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t5tbp" event={"ID":"0f1f9808-a20d-4fdc-b2ea-586d3b917cc8","Type":"ContainerDied","Data":"91f9fd266d020a496114bdb620d13b01edf6ac1e63fd52245e2ae404f5f3d48c"} Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.956620 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t5tbp" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.956630 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f9fd266d020a496114bdb620d13b01edf6ac1e63fd52245e2ae404f5f3d48c" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.958131 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ee86-account-create-update-wj5mf" event={"ID":"d7312900-d50f-4b7a-9b16-fb9487c1ad62","Type":"ContainerDied","Data":"5bc1db21ea9f33655f913c83a982dcad5b3a5564bc828aaeeab8f7ddd51ac314"} Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.958212 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc1db21ea9f33655f913c83a982dcad5b3a5564bc828aaeeab8f7ddd51ac314" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.958269 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ee86-account-create-update-wj5mf" Jan 29 16:57:23 crc kubenswrapper[4746]: I0129 16:57:23.959963 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerStarted","Data":"b31baea1846cf2670caa37b296904f3c15bb87f56a6a69beffdf005424ad74df"} Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.048682 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.049056 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-log" containerID="cri-o://d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc" gracePeriod=30 Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.049445 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-httpd" containerID="cri-o://9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806" gracePeriod=30 Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.302238 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.459287 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69a2f8f-ab26-4e1a-abf9-8995d0f5f528" path="/var/lib/kubelet/pods/f69a2f8f-ab26-4e1a-abf9-8995d0f5f528/volumes" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.493753 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d29372-486b-4db2-8e2e-cd09059c9edc-operator-scripts\") pod \"e8d29372-486b-4db2-8e2e-cd09059c9edc\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.493821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r276\" (UniqueName: \"kubernetes.io/projected/e8d29372-486b-4db2-8e2e-cd09059c9edc-kube-api-access-2r276\") pod \"e8d29372-486b-4db2-8e2e-cd09059c9edc\" (UID: \"e8d29372-486b-4db2-8e2e-cd09059c9edc\") " Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.494324 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d29372-486b-4db2-8e2e-cd09059c9edc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8d29372-486b-4db2-8e2e-cd09059c9edc" (UID: "e8d29372-486b-4db2-8e2e-cd09059c9edc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.494870 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8d29372-486b-4db2-8e2e-cd09059c9edc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.499567 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d29372-486b-4db2-8e2e-cd09059c9edc-kube-api-access-2r276" (OuterVolumeSpecName: "kube-api-access-2r276") pod "e8d29372-486b-4db2-8e2e-cd09059c9edc" (UID: "e8d29372-486b-4db2-8e2e-cd09059c9edc"). InnerVolumeSpecName "kube-api-access-2r276". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.545051 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.550588 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.596331 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r276\" (UniqueName: \"kubernetes.io/projected/e8d29372-486b-4db2-8e2e-cd09059c9edc-kube-api-access-2r276\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.697647 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-operator-scripts\") pod \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.697954 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n745g\" (UniqueName: \"kubernetes.io/projected/4d51988e-2959-4dc6-af55-b3c40c2428ee-kube-api-access-n745g\") pod \"4d51988e-2959-4dc6-af55-b3c40c2428ee\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.698602 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgrbc\" (UniqueName: \"kubernetes.io/projected/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-kube-api-access-tgrbc\") pod \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\" (UID: \"a3d3b8c6-5997-4881-8b92-b5244c49fd1c\") " Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.698784 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d51988e-2959-4dc6-af55-b3c40c2428ee-operator-scripts\") pod \"4d51988e-2959-4dc6-af55-b3c40c2428ee\" (UID: \"4d51988e-2959-4dc6-af55-b3c40c2428ee\") " Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.698500 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3d3b8c6-5997-4881-8b92-b5244c49fd1c" (UID: "a3d3b8c6-5997-4881-8b92-b5244c49fd1c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.699492 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d51988e-2959-4dc6-af55-b3c40c2428ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d51988e-2959-4dc6-af55-b3c40c2428ee" (UID: "4d51988e-2959-4dc6-af55-b3c40c2428ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.700979 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d51988e-2959-4dc6-af55-b3c40c2428ee-kube-api-access-n745g" (OuterVolumeSpecName: "kube-api-access-n745g") pod "4d51988e-2959-4dc6-af55-b3c40c2428ee" (UID: "4d51988e-2959-4dc6-af55-b3c40c2428ee"). InnerVolumeSpecName "kube-api-access-n745g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.702468 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-kube-api-access-tgrbc" (OuterVolumeSpecName: "kube-api-access-tgrbc") pod "a3d3b8c6-5997-4881-8b92-b5244c49fd1c" (UID: "a3d3b8c6-5997-4881-8b92-b5244c49fd1c"). 
InnerVolumeSpecName "kube-api-access-tgrbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.800361 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d51988e-2959-4dc6-af55-b3c40c2428ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.800761 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.800841 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n745g\" (UniqueName: \"kubernetes.io/projected/4d51988e-2959-4dc6-af55-b3c40c2428ee-kube-api-access-n745g\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.800907 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgrbc\" (UniqueName: \"kubernetes.io/projected/a3d3b8c6-5997-4881-8b92-b5244c49fd1c-kube-api-access-tgrbc\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.973329 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lmdqh" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.975367 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lmdqh" event={"ID":"4d51988e-2959-4dc6-af55-b3c40c2428ee","Type":"ContainerDied","Data":"210a98cf6e89a6223de964785e807bf8b947ef1c28df43589812937c3e89f0d6"} Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.975521 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="210a98cf6e89a6223de964785e807bf8b947ef1c28df43589812937c3e89f0d6" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.977390 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-628f-account-create-update-xfw92" event={"ID":"a3d3b8c6-5997-4881-8b92-b5244c49fd1c","Type":"ContainerDied","Data":"f453378012e42e8b33325c87505ca469d7cd7502ae4908d67d642ccde7192d31"} Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.977501 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f453378012e42e8b33325c87505ca469d7cd7502ae4908d67d642ccde7192d31" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.977618 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-628f-account-create-update-xfw92" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.981496 4746 generic.go:334] "Generic (PLEG): container finished" podID="0016fc12-9058-415e-be92-a37e69d56c58" containerID="d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc" exitCode=143 Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.981584 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0016fc12-9058-415e-be92-a37e69d56c58","Type":"ContainerDied","Data":"d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc"} Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.984095 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" event={"ID":"e8d29372-486b-4db2-8e2e-cd09059c9edc","Type":"ContainerDied","Data":"f262d3171e08b25892b1f6244663956fb57903d52c9c68616e6c4808b5fb3291"} Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.984124 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f262d3171e08b25892b1f6244663956fb57903d52c9c68616e6c4808b5fb3291" Jan 29 16:57:24 crc kubenswrapper[4746]: I0129 16:57:24.984176 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3ee3-account-create-update-wwrbf" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.154518 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.276749 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.276981 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-log" containerID="cri-o://066bbca36d5f6c3b28e4f47c6fe7cc6dbd45acfb34c1754e61e1c8ac6bafad31" gracePeriod=30 Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.277111 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-httpd" containerID="cri-o://a7091a22f772906d88642a0d2bc3ba5747d115f3c1694c2d9497b8a37ca3aaa2" gracePeriod=30 Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.664111 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rjblb"] Jan 29 16:57:25 crc kubenswrapper[4746]: E0129 16:57:25.665129 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d29372-486b-4db2-8e2e-cd09059c9edc" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.665247 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d29372-486b-4db2-8e2e-cd09059c9edc" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: E0129 16:57:25.665341 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7312900-d50f-4b7a-9b16-fb9487c1ad62" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.665415 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7312900-d50f-4b7a-9b16-fb9487c1ad62" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: E0129 16:57:25.665511 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4d51988e-2959-4dc6-af55-b3c40c2428ee" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.665589 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d51988e-2959-4dc6-af55-b3c40c2428ee" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: E0129 16:57:25.665667 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1f9808-a20d-4fdc-b2ea-586d3b917cc8" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.665739 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1f9808-a20d-4fdc-b2ea-586d3b917cc8" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: E0129 16:57:25.665828 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8319ac14-e61e-4c9f-b22a-b2f08d0a6723" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.665898 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8319ac14-e61e-4c9f-b22a-b2f08d0a6723" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: E0129 16:57:25.665981 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d3b8c6-5997-4881-8b92-b5244c49fd1c" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.666056 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d3b8c6-5997-4881-8b92-b5244c49fd1c" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.666406 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1f9808-a20d-4fdc-b2ea-586d3b917cc8" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.666506 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7312900-d50f-4b7a-9b16-fb9487c1ad62" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.666590 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d29372-486b-4db2-8e2e-cd09059c9edc" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.666665 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d51988e-2959-4dc6-af55-b3c40c2428ee" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.666740 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8319ac14-e61e-4c9f-b22a-b2f08d0a6723" containerName="mariadb-database-create" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.666833 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d3b8c6-5997-4881-8b92-b5244c49fd1c" containerName="mariadb-account-create-update" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.667647 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.669585 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.670099 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9ctqw" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.673227 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.677475 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rjblb"] Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.816387 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbf7p\" (UniqueName: \"kubernetes.io/projected/cb9c66b9-97e0-49b8-8229-2e90537ad349-kube-api-access-vbf7p\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.816807 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-config-data\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.816978 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.817105 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-scripts\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.919446 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-scripts\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.919832 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbf7p\" (UniqueName: \"kubernetes.io/projected/cb9c66b9-97e0-49b8-8229-2e90537ad349-kube-api-access-vbf7p\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.920308 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-config-data\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: 
\"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.920888 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.928406 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.929048 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-config-data\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.930663 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-scripts\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.938802 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbf7p\" (UniqueName: \"kubernetes.io/projected/cb9c66b9-97e0-49b8-8229-2e90537ad349-kube-api-access-vbf7p\") pod \"nova-cell0-conductor-db-sync-rjblb\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.983388 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:25 crc kubenswrapper[4746]: I0129 16:57:25.996137 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerStarted","Data":"6fb08fe0c4a7e99b2f1f81da2326add975e1e86e9cffaa1c3817ab23611023a9"} Jan 29 16:57:26 crc kubenswrapper[4746]: I0129 16:57:26.023220 4746 generic.go:334] "Generic (PLEG): container finished" podID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerID="066bbca36d5f6c3b28e4f47c6fe7cc6dbd45acfb34c1754e61e1c8ac6bafad31" exitCode=143 Jan 29 16:57:26 crc kubenswrapper[4746]: I0129 16:57:26.023406 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"338e74fb-ad8e-44b8-a56f-cb984371a8f4","Type":"ContainerDied","Data":"066bbca36d5f6c3b28e4f47c6fe7cc6dbd45acfb34c1754e61e1c8ac6bafad31"} Jan 29 16:57:26 crc kubenswrapper[4746]: I0129 16:57:26.670730 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rjblb"] Jan 29 16:57:26 crc kubenswrapper[4746]: W0129 16:57:26.683245 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb9c66b9_97e0_49b8_8229_2e90537ad349.slice/crio-0ba4957e5fd7c61a0366a017a436034bb1a850f0f1151ae267233be20f6a4bfe WatchSource:0}: Error finding container 0ba4957e5fd7c61a0366a017a436034bb1a850f0f1151ae267233be20f6a4bfe: Status 404 returned error can't find the container with id 0ba4957e5fd7c61a0366a017a436034bb1a850f0f1151ae267233be20f6a4bfe Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.037221 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rjblb" event={"ID":"cb9c66b9-97e0-49b8-8229-2e90537ad349","Type":"ContainerStarted","Data":"0ba4957e5fd7c61a0366a017a436034bb1a850f0f1151ae267233be20f6a4bfe"} Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.039612 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerStarted","Data":"1a5724a3064acb284a017f5b695afd63dd83e5c3fae5898b95e30aa881c5cb1a"} Jan 29 16:57:27 crc kubenswrapper[4746]: E0129 16:57:27.793665 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:57:27 crc kubenswrapper[4746]: E0129 16:57:27.794425 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtjjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cf6919a0-014b-4d03-bec1-54d64e5fdd85): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:27 crc kubenswrapper[4746]: E0129 16:57:27.796481 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.827231 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.957582 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-config-data\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.957630 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-scripts\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.957723 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngjrt\" (UniqueName: \"kubernetes.io/projected/0016fc12-9058-415e-be92-a37e69d56c58-kube-api-access-ngjrt\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.957743 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-combined-ca-bundle\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.957782 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-httpd-run\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.958061 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-logs\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.958117 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-public-tls-certs\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.958161 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0016fc12-9058-415e-be92-a37e69d56c58\" (UID: \"0016fc12-9058-415e-be92-a37e69d56c58\") " Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.959783 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-logs" (OuterVolumeSpecName: "logs") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.960096 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.966717 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.969372 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-scripts" (OuterVolumeSpecName: "scripts") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:27 crc kubenswrapper[4746]: I0129 16:57:27.972801 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0016fc12-9058-415e-be92-a37e69d56c58-kube-api-access-ngjrt" (OuterVolumeSpecName: "kube-api-access-ngjrt") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "kube-api-access-ngjrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.006472 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.022360 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-config-data" (OuterVolumeSpecName: "config-data") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.044304 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0016fc12-9058-415e-be92-a37e69d56c58" (UID: "0016fc12-9058-415e-be92-a37e69d56c58"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.057138 4746 generic.go:334] "Generic (PLEG): container finished" podID="0016fc12-9058-415e-be92-a37e69d56c58" containerID="9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806" exitCode=0 Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.057212 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0016fc12-9058-415e-be92-a37e69d56c58","Type":"ContainerDied","Data":"9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806"} Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.057241 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0016fc12-9058-415e-be92-a37e69d56c58","Type":"ContainerDied","Data":"0a3b59f0d52b4e3bd4c899e1d565b6f131e3c90e8c6c77e4f6e030cf57378a20"} Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.057258 4746 scope.go:117] "RemoveContainer" containerID="9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.057369 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059765 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059796 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059809 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059865 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngjrt\" (UniqueName: \"kubernetes.io/projected/0016fc12-9058-415e-be92-a37e69d56c58-kube-api-access-ngjrt\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059877 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059888 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0016fc12-9058-415e-be92-a37e69d56c58-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059897 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0016fc12-9058-415e-be92-a37e69d56c58-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.059926 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.063958 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerStarted","Data":"11e27faf6e7c00379cf845ca269f360ff742b9b9751e38d091b738a9692219b5"} Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.064222 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-central-agent" containerID="cri-o://6fb08fe0c4a7e99b2f1f81da2326add975e1e86e9cffaa1c3817ab23611023a9" gracePeriod=30 Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.065102 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="sg-core" containerID="cri-o://11e27faf6e7c00379cf845ca269f360ff742b9b9751e38d091b738a9692219b5" gracePeriod=30 Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.065179 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-notification-agent" containerID="cri-o://1a5724a3064acb284a017f5b695afd63dd83e5c3fae5898b95e30aa881c5cb1a" gracePeriod=30 Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.095434 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.108793 4746 scope.go:117] "RemoveContainer" containerID="d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.125253 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.136544 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.141459 4746 scope.go:117] "RemoveContainer" containerID="9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806" Jan 29 16:57:28 crc kubenswrapper[4746]: E0129 16:57:28.143504 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806\": container with ID starting with 9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806 not found: ID does not exist" containerID="9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.143550 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806"} err="failed to get container status \"9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806\": rpc error: code = NotFound desc = could not find container \"9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806\": container with ID starting with 9d5a0164c57750d9c3c968fd28b18a3da08aed66a48a9983db738487d7f26806 not found: ID does not exist" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.143579 4746 scope.go:117] "RemoveContainer" containerID="d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc" Jan 29 16:57:28 crc kubenswrapper[4746]: E0129 16:57:28.143893 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc\": container with ID starting with d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc not found: ID does not exist" containerID="d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.143917 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc"} err="failed to get container status \"d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc\": rpc error: code = NotFound desc = could not find container \"d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc\": container with ID starting with d9c932304cb0b6d4b12b305abf77459fa1bc9f3d1f1547634e06f787aed9b1cc not found: ID does not exist" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.150844 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:28 crc kubenswrapper[4746]: E0129 16:57:28.151178 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-log" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.151215 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-log" Jan 29 16:57:28 crc kubenswrapper[4746]: E0129 16:57:28.151243 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-httpd" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.151251 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-httpd" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.151441 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-httpd" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.151459 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0016fc12-9058-415e-be92-a37e69d56c58" containerName="glance-log" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.153391 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.155757 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.156209 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.161345 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.163801 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.262881 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsfk\" (UniqueName: \"kubernetes.io/projected/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-kube-api-access-vxsfk\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.262927 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.262980 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.263029 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.263074 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-logs\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.263096 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.263118 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.263136 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365369 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-logs\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365412 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365447 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365559 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxsfk\" (UniqueName: \"kubernetes.io/projected/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-kube-api-access-vxsfk\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365588 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365645 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.365820 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " 
pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.366149 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-logs\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.366260 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.366663 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.370405 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.370593 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.371163 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.373784 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.391690 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxsfk\" (UniqueName: \"kubernetes.io/projected/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-kube-api-access-vxsfk\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.414597 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.458016 4746 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="0016fc12-9058-415e-be92-a37e69d56c58" path="/var/lib/kubelet/pods/0016fc12-9058-415e-be92-a37e69d56c58/volumes" Jan 29 16:57:28 crc kubenswrapper[4746]: I0129 16:57:28.481054 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.124428 4746 generic.go:334] "Generic (PLEG): container finished" podID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerID="a7091a22f772906d88642a0d2bc3ba5747d115f3c1694c2d9497b8a37ca3aaa2" exitCode=0 Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.125050 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"338e74fb-ad8e-44b8-a56f-cb984371a8f4","Type":"ContainerDied","Data":"a7091a22f772906d88642a0d2bc3ba5747d115f3c1694c2d9497b8a37ca3aaa2"} Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.130308 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.140684 4746 generic.go:334] "Generic (PLEG): container finished" podID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerID="11e27faf6e7c00379cf845ca269f360ff742b9b9751e38d091b738a9692219b5" exitCode=2 Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.140716 4746 generic.go:334] "Generic (PLEG): container finished" podID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerID="1a5724a3064acb284a017f5b695afd63dd83e5c3fae5898b95e30aa881c5cb1a" exitCode=0 Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.140768 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerDied","Data":"11e27faf6e7c00379cf845ca269f360ff742b9b9751e38d091b738a9692219b5"} Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.140794 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerDied","Data":"1a5724a3064acb284a017f5b695afd63dd83e5c3fae5898b95e30aa881c5cb1a"} Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.215223 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.289702 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.289891 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-logs\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.289916 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.289941 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-combined-ca-bundle\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.289978 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l922\" (UniqueName: \"kubernetes.io/projected/338e74fb-ad8e-44b8-a56f-cb984371a8f4-kube-api-access-4l922\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.290009 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-internal-tls-certs\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.290033 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-scripts\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.290074 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-httpd-run\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.290526 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-logs" (OuterVolumeSpecName: "logs") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.290858 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.300785 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/338e74fb-ad8e-44b8-a56f-cb984371a8f4-kube-api-access-4l922" (OuterVolumeSpecName: "kube-api-access-4l922") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "kube-api-access-4l922". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.300832 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.305280 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-scripts" (OuterVolumeSpecName: "scripts") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.341011 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:29 crc kubenswrapper[4746]: E0129 16:57:29.367719 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data podName:338e74fb-ad8e-44b8-a56f-cb984371a8f4 nodeName:}" failed. No retries permitted until 2026-01-29 16:57:29.867690395 +0000 UTC m=+1372.268275039 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4") : error deleting /var/lib/kubelet/pods/338e74fb-ad8e-44b8-a56f-cb984371a8f4/volume-subpaths: remove /var/lib/kubelet/pods/338e74fb-ad8e-44b8-a56f-cb984371a8f4/volume-subpaths: no such file or directory Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.371088 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.392672 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.392700 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.392709 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l922\" (UniqueName: \"kubernetes.io/projected/338e74fb-ad8e-44b8-a56f-cb984371a8f4-kube-api-access-4l922\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.392717 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.392728 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.392738 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/338e74fb-ad8e-44b8-a56f-cb984371a8f4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.392765 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.450998 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.495489 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.913437 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data\") pod \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\" (UID: \"338e74fb-ad8e-44b8-a56f-cb984371a8f4\") " Jan 29 16:57:29 crc kubenswrapper[4746]: I0129 16:57:29.918151 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data" (OuterVolumeSpecName: "config-data") pod "338e74fb-ad8e-44b8-a56f-cb984371a8f4" (UID: "338e74fb-ad8e-44b8-a56f-cb984371a8f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.016509 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/338e74fb-ad8e-44b8-a56f-cb984371a8f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.157373 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6","Type":"ContainerStarted","Data":"d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5"} Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.157430 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6","Type":"ContainerStarted","Data":"d32d646933004e53aeb0b513f795313956ca77d247f42afe7d3d85e193ae1d3f"} Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.159501 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"338e74fb-ad8e-44b8-a56f-cb984371a8f4","Type":"ContainerDied","Data":"d9fe517129ac2760892fca8b5e75cd7c053f2bd571c0b99dceb1cdf635e997a8"} Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.159536 4746 scope.go:117] "RemoveContainer" containerID="a7091a22f772906d88642a0d2bc3ba5747d115f3c1694c2d9497b8a37ca3aaa2" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.159702 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.198765 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.199332 4746 scope.go:117] "RemoveContainer" containerID="066bbca36d5f6c3b28e4f47c6fe7cc6dbd45acfb34c1754e61e1c8ac6bafad31" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.207352 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.233767 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:30 crc kubenswrapper[4746]: E0129 16:57:30.234424 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-httpd" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.234511 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-httpd" Jan 29 16:57:30 crc kubenswrapper[4746]: E0129 16:57:30.234588 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-log" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.235109 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-log" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.235342 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-log" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.235422 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" containerName="glance-httpd" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.236435 4746 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.240836 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.242329 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.269809 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.321731 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.321814 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.321850 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.321934 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.321979 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.322002 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhbr\" (UniqueName: \"kubernetes.io/projected/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-kube-api-access-lzhbr\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.322039 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.322079 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.423963 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424049 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424080 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzhbr\" (UniqueName: \"kubernetes.io/projected/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-kube-api-access-lzhbr\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424115 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424154 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424576 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-logs\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424867 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.424958 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.425511 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.425883 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.438205 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.441767 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.442471 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.448712 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.449276 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzhbr\" (UniqueName: \"kubernetes.io/projected/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-kube-api-access-lzhbr\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.459106 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4746]: I0129 16:57:30.492030 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="338e74fb-ad8e-44b8-a56f-cb984371a8f4" path="/var/lib/kubelet/pods/338e74fb-ad8e-44b8-a56f-cb984371a8f4/volumes" Jan 29 16:57:30 crc 
kubenswrapper[4746]: I0129 16:57:30.556324 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:31 crc kubenswrapper[4746]: I0129 16:57:31.173022 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:31 crc kubenswrapper[4746]: W0129 16:57:31.179362 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf93a42f7_a972_44c2_a2a4_5f698ba4caf7.slice/crio-4bec86215d0f7d79211bca00ac121bb4fa38755b694cd86890ca813ade593f54 WatchSource:0}: Error finding container 4bec86215d0f7d79211bca00ac121bb4fa38755b694cd86890ca813ade593f54: Status 404 returned error can't find the container with id 4bec86215d0f7d79211bca00ac121bb4fa38755b694cd86890ca813ade593f54 Jan 29 16:57:31 crc kubenswrapper[4746]: I0129 16:57:31.184030 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6","Type":"ContainerStarted","Data":"cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228"} Jan 29 16:57:31 crc kubenswrapper[4746]: I0129 16:57:31.220986 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.220964377 podStartE2EDuration="3.220964377s" podCreationTimestamp="2026-01-29 16:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:31.204734862 +0000 UTC m=+1373.605319516" watchObservedRunningTime="2026-01-29 16:57:31.220964377 +0000 UTC m=+1373.621549021" Jan 29 16:57:32 crc kubenswrapper[4746]: I0129 16:57:32.196547 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f93a42f7-a972-44c2-a2a4-5f698ba4caf7","Type":"ContainerStarted","Data":"61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31"} Jan 29 16:57:32 crc kubenswrapper[4746]: I0129 16:57:32.196886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f93a42f7-a972-44c2-a2a4-5f698ba4caf7","Type":"ContainerStarted","Data":"4bec86215d0f7d79211bca00ac121bb4fa38755b694cd86890ca813ade593f54"} Jan 29 16:57:35 crc kubenswrapper[4746]: I0129 16:57:35.234138 4746 generic.go:334] "Generic (PLEG): container finished" podID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerID="6fb08fe0c4a7e99b2f1f81da2326add975e1e86e9cffaa1c3817ab23611023a9" exitCode=0 Jan 29 16:57:35 crc kubenswrapper[4746]: I0129 16:57:35.234225 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerDied","Data":"6fb08fe0c4a7e99b2f1f81da2326add975e1e86e9cffaa1c3817ab23611023a9"} Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.652701 4746 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749040 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtjjv\" (UniqueName: \"kubernetes.io/projected/cf6919a0-014b-4d03-bec1-54d64e5fdd85-kube-api-access-xtjjv\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749145 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-sg-core-conf-yaml\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749180 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-log-httpd\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749277 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749345 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-run-httpd\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749400 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-scripts\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749465 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-combined-ca-bundle\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.749923 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.750103 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.753313 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-scripts" (OuterVolumeSpecName: "scripts") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.755237 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6919a0-014b-4d03-bec1-54d64e5fdd85-kube-api-access-xtjjv" (OuterVolumeSpecName: "kube-api-access-xtjjv") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85"). InnerVolumeSpecName "kube-api-access-xtjjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.778130 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:57:36 crc kubenswrapper[4746]: E0129 16:57:36.803966 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data podName:cf6919a0-014b-4d03-bec1-54d64e5fdd85 nodeName:}" failed. No retries permitted until 2026-01-29 16:57:37.303924395 +0000 UTC m=+1379.704509049 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85") : error deleting /var/lib/kubelet/pods/cf6919a0-014b-4d03-bec1-54d64e5fdd85/volume-subpaths: remove /var/lib/kubelet/pods/cf6919a0-014b-4d03-bec1-54d64e5fdd85/volume-subpaths: no such file or directory
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.807513 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.854597 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.854644 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.854661 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtjjv\" (UniqueName: \"kubernetes.io/projected/cf6919a0-014b-4d03-bec1-54d64e5fdd85-kube-api-access-xtjjv\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.854672 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.854681 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:36 crc kubenswrapper[4746]: I0129 16:57:36.854691 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf6919a0-014b-4d03-bec1-54d64e5fdd85-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.259148 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rjblb" event={"ID":"cb9c66b9-97e0-49b8-8229-2e90537ad349","Type":"ContainerStarted","Data":"5a97250572ad990f099c81e7ee46a00c3f12562feabfb5aa66086e13ecd618cc"}
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.262147 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf6919a0-014b-4d03-bec1-54d64e5fdd85","Type":"ContainerDied","Data":"b31baea1846cf2670caa37b296904f3c15bb87f56a6a69beffdf005424ad74df"}
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.262223 4746 scope.go:117] "RemoveContainer" containerID="11e27faf6e7c00379cf845ca269f360ff742b9b9751e38d091b738a9692219b5"
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.262219 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.264047 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f93a42f7-a972-44c2-a2a4-5f698ba4caf7","Type":"ContainerStarted","Data":"f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d"}
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.281309 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rjblb" podStartSLOduration=2.501728875 podStartE2EDuration="12.281292455s" podCreationTimestamp="2026-01-29 16:57:25 +0000 UTC" firstStartedPulling="2026-01-29 16:57:26.686074616 +0000 UTC m=+1369.086659250" lastFinishedPulling="2026-01-29 16:57:36.465638186 +0000 UTC m=+1378.866222830" observedRunningTime="2026-01-29 16:57:37.27780521 +0000 UTC m=+1379.678389874" watchObservedRunningTime="2026-01-29 16:57:37.281292455 +0000 UTC m=+1379.681877099"
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.284551 4746 scope.go:117] "RemoveContainer" containerID="1a5724a3064acb284a017f5b695afd63dd83e5c3fae5898b95e30aa881c5cb1a"
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.301361 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.301343585 podStartE2EDuration="7.301343585s" podCreationTimestamp="2026-01-29 16:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:37.29938241 +0000 UTC m=+1379.699967054" watchObservedRunningTime="2026-01-29 16:57:37.301343585 +0000 UTC m=+1379.701928229"
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.304052 4746 scope.go:117] "RemoveContainer" containerID="6fb08fe0c4a7e99b2f1f81da2326add975e1e86e9cffaa1c3817ab23611023a9"
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.362642 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data\") pod \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\" (UID: \"cf6919a0-014b-4d03-bec1-54d64e5fdd85\") "
Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.371032 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data" (OuterVolumeSpecName: "config-data") pod "cf6919a0-014b-4d03-bec1-54d64e5fdd85" (UID: "cf6919a0-014b-4d03-bec1-54d64e5fdd85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.465213 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf6919a0-014b-4d03-bec1-54d64e5fdd85-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.629340 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.644898 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.657629 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:37 crc kubenswrapper[4746]: E0129 16:57:37.658578 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="sg-core" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.658596 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="sg-core" Jan 29 16:57:37 crc kubenswrapper[4746]: E0129 16:57:37.658617 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-notification-agent" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.658626 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-notification-agent" Jan 29 16:57:37 crc kubenswrapper[4746]: E0129 16:57:37.658636 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-central-agent" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.658643 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-central-agent" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.658882 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-central-agent" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.658897 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="ceilometer-notification-agent" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.658910 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" containerName="sg-core" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.661031 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.668528 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.668630 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.669933 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.772217 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-config-data\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.772301 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.772418 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.772490 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-scripts\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.772515 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-run-httpd\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.772567 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whq9f\" (UniqueName: \"kubernetes.io/projected/2174b2df-07c0-4a6b-9d20-ac32f03579e7-kube-api-access-whq9f\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.772614 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-log-httpd\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874118 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whq9f\" (UniqueName: \"kubernetes.io/projected/2174b2df-07c0-4a6b-9d20-ac32f03579e7-kube-api-access-whq9f\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: 
I0129 16:57:37.874178 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-log-httpd\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874283 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-config-data\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874340 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874385 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874415 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-scripts\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874444 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-run-httpd\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874813 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-log-httpd\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.874915 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-run-httpd\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.878150 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.878312 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-scripts\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.886262 4746 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-config-data\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.887559 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.891772 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whq9f\" (UniqueName: \"kubernetes.io/projected/2174b2df-07c0-4a6b-9d20-ac32f03579e7-kube-api-access-whq9f\") pod \"ceilometer-0\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " pod="openstack/ceilometer-0" Jan 29 16:57:37 crc kubenswrapper[4746]: I0129 16:57:37.987689 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:38 crc kubenswrapper[4746]: W0129 16:57:38.455331 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2174b2df_07c0_4a6b_9d20_ac32f03579e7.slice/crio-d4b8ea77c4f8ce3059c8ed96fa8eb6e18daab6d078b6425938b370a0daee5e2a WatchSource:0}: Error finding container d4b8ea77c4f8ce3059c8ed96fa8eb6e18daab6d078b6425938b370a0daee5e2a: Status 404 returned error can't find the container with id d4b8ea77c4f8ce3059c8ed96fa8eb6e18daab6d078b6425938b370a0daee5e2a Jan 29 16:57:38 crc kubenswrapper[4746]: I0129 16:57:38.457262 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6919a0-014b-4d03-bec1-54d64e5fdd85" path="/var/lib/kubelet/pods/cf6919a0-014b-4d03-bec1-54d64e5fdd85/volumes" Jan 29 16:57:38 crc kubenswrapper[4746]: I0129 16:57:38.459173 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:38 crc kubenswrapper[4746]: I0129 16:57:38.481230 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 16:57:38 crc kubenswrapper[4746]: I0129 16:57:38.483322 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 16:57:38 crc kubenswrapper[4746]: I0129 16:57:38.510088 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 16:57:38 crc kubenswrapper[4746]: I0129 16:57:38.527394 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 16:57:39 crc kubenswrapper[4746]: I0129 16:57:39.284815 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerStarted","Data":"58770bc10104e5e996d48804edc9c3074eaa5c55120cc98f83823d66949bd36e"} Jan 29 16:57:39 crc kubenswrapper[4746]: I0129 16:57:39.285407 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerStarted","Data":"d4b8ea77c4f8ce3059c8ed96fa8eb6e18daab6d078b6425938b370a0daee5e2a"} Jan 29 16:57:39 crc kubenswrapper[4746]: I0129 16:57:39.285901 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 
29 16:57:39 crc kubenswrapper[4746]: I0129 16:57:39.285952 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 16:57:40 crc kubenswrapper[4746]: I0129 16:57:40.296162 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerStarted","Data":"d10dfdf6dd41a21d25df6b5e2048784c5020b564691261918368d7507f0efd42"} Jan 29 16:57:40 crc kubenswrapper[4746]: I0129 16:57:40.557497 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:40 crc kubenswrapper[4746]: I0129 16:57:40.559612 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:40 crc kubenswrapper[4746]: I0129 16:57:40.587341 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:40 crc kubenswrapper[4746]: I0129 16:57:40.601331 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:41 crc kubenswrapper[4746]: I0129 16:57:41.304933 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:57:41 crc kubenswrapper[4746]: I0129 16:57:41.305279 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:57:41 crc kubenswrapper[4746]: I0129 16:57:41.306584 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:41 crc kubenswrapper[4746]: I0129 16:57:41.306641 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:41 crc kubenswrapper[4746]: I0129 16:57:41.418959 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 16:57:41 crc kubenswrapper[4746]: I0129 16:57:41.426510 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 16:57:42 crc kubenswrapper[4746]: I0129 16:57:42.389518 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:43 crc kubenswrapper[4746]: I0129 16:57:43.327456 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:43 crc kubenswrapper[4746]: I0129 16:57:43.327953 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:57:43 crc kubenswrapper[4746]: I0129 16:57:43.487119 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:43 crc kubenswrapper[4746]: E0129 16:57:43.556438 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:57:43 crc kubenswrapper[4746]: E0129 16:57:43.556585 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-whq9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2174b2df-07c0-4a6b-9d20-ac32f03579e7): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:43 crc kubenswrapper[4746]: E0129 16:57:43.557781 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" Jan 29 16:57:44 crc kubenswrapper[4746]: I0129 16:57:44.332263 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerStarted","Data":"afa2ee88234ce58e8655e7e4f68d471866bf0799376edfc087a444cef22443d1"} Jan 29 16:57:44 crc kubenswrapper[4746]: I0129 16:57:44.332397 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="sg-core" containerID="cri-o://afa2ee88234ce58e8655e7e4f68d471866bf0799376edfc087a444cef22443d1" gracePeriod=30 Jan 29 16:57:44 crc kubenswrapper[4746]: I0129 16:57:44.332428 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-central-agent" containerID="cri-o://58770bc10104e5e996d48804edc9c3074eaa5c55120cc98f83823d66949bd36e" gracePeriod=30 Jan 29 16:57:44 crc kubenswrapper[4746]: I0129 16:57:44.332453 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-notification-agent" containerID="cri-o://d10dfdf6dd41a21d25df6b5e2048784c5020b564691261918368d7507f0efd42" gracePeriod=30 Jan 29 16:57:45 crc kubenswrapper[4746]: I0129 16:57:45.349505 4746 generic.go:334] "Generic (PLEG): container finished" podID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerID="afa2ee88234ce58e8655e7e4f68d471866bf0799376edfc087a444cef22443d1" exitCode=2 Jan 29 16:57:45 crc kubenswrapper[4746]: I0129 16:57:45.349551 4746 generic.go:334] "Generic (PLEG): container finished" podID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerID="d10dfdf6dd41a21d25df6b5e2048784c5020b564691261918368d7507f0efd42" exitCode=0 Jan 29 16:57:45 crc kubenswrapper[4746]: I0129 16:57:45.349558 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerDied","Data":"afa2ee88234ce58e8655e7e4f68d471866bf0799376edfc087a444cef22443d1"} Jan 29 16:57:45 crc kubenswrapper[4746]: I0129 16:57:45.349655 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerDied","Data":"d10dfdf6dd41a21d25df6b5e2048784c5020b564691261918368d7507f0efd42"} Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.363530 4746 generic.go:334] "Generic (PLEG): container finished" podID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerID="58770bc10104e5e996d48804edc9c3074eaa5c55120cc98f83823d66949bd36e" exitCode=0 Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.363601 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerDied","Data":"58770bc10104e5e996d48804edc9c3074eaa5c55120cc98f83823d66949bd36e"} Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.605318 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.760882 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-combined-ca-bundle\") pod \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.760950 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-config-data\") pod \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.760977 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-scripts\") pod \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.761074 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-run-httpd\") pod \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.761097 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-sg-core-conf-yaml\") pod \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.761159 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whq9f\" (UniqueName: \"kubernetes.io/projected/2174b2df-07c0-4a6b-9d20-ac32f03579e7-kube-api-access-whq9f\") pod \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.761224 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-log-httpd\") pod \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\" (UID: \"2174b2df-07c0-4a6b-9d20-ac32f03579e7\") " Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.761926 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2174b2df-07c0-4a6b-9d20-ac32f03579e7" (UID: "2174b2df-07c0-4a6b-9d20-ac32f03579e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.761940 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2174b2df-07c0-4a6b-9d20-ac32f03579e7" (UID: "2174b2df-07c0-4a6b-9d20-ac32f03579e7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.767334 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-scripts" (OuterVolumeSpecName: "scripts") pod "2174b2df-07c0-4a6b-9d20-ac32f03579e7" (UID: "2174b2df-07c0-4a6b-9d20-ac32f03579e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.768113 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2174b2df-07c0-4a6b-9d20-ac32f03579e7-kube-api-access-whq9f" (OuterVolumeSpecName: "kube-api-access-whq9f") pod "2174b2df-07c0-4a6b-9d20-ac32f03579e7" (UID: "2174b2df-07c0-4a6b-9d20-ac32f03579e7"). InnerVolumeSpecName "kube-api-access-whq9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.791157 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2174b2df-07c0-4a6b-9d20-ac32f03579e7" (UID: "2174b2df-07c0-4a6b-9d20-ac32f03579e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.806901 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-config-data" (OuterVolumeSpecName: "config-data") pod "2174b2df-07c0-4a6b-9d20-ac32f03579e7" (UID: "2174b2df-07c0-4a6b-9d20-ac32f03579e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.814530 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2174b2df-07c0-4a6b-9d20-ac32f03579e7" (UID: "2174b2df-07c0-4a6b-9d20-ac32f03579e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.863445 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.863480 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.863490 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.863500 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whq9f\" (UniqueName: \"kubernetes.io/projected/2174b2df-07c0-4a6b-9d20-ac32f03579e7-kube-api-access-whq9f\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.863509 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2174b2df-07c0-4a6b-9d20-ac32f03579e7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.863520 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:46 crc kubenswrapper[4746]: I0129 16:57:46.863528 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2174b2df-07c0-4a6b-9d20-ac32f03579e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.378517 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2174b2df-07c0-4a6b-9d20-ac32f03579e7","Type":"ContainerDied","Data":"d4b8ea77c4f8ce3059c8ed96fa8eb6e18daab6d078b6425938b370a0daee5e2a"} Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.378590 4746 scope.go:117] "RemoveContainer" containerID="afa2ee88234ce58e8655e7e4f68d471866bf0799376edfc087a444cef22443d1" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.378634 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.399626 4746 scope.go:117] "RemoveContainer" containerID="d10dfdf6dd41a21d25df6b5e2048784c5020b564691261918368d7507f0efd42" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.438515 4746 scope.go:117] "RemoveContainer" containerID="58770bc10104e5e996d48804edc9c3074eaa5c55120cc98f83823d66949bd36e" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.458359 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.471496 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.481770 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:47 crc kubenswrapper[4746]: E0129 16:57:47.482324 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-central-agent" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.482342 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-central-agent" Jan 29 16:57:47 crc kubenswrapper[4746]: E0129 16:57:47.482373 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="sg-core" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.482382 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="sg-core" Jan 29 16:57:47 crc kubenswrapper[4746]: E0129 16:57:47.482401 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-notification-agent" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.482408 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-notification-agent" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.482646 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-notification-agent" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.482665 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="ceilometer-central-agent" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.482689 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" containerName="sg-core" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.484914 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.487642 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.489163 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.490875 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.576927 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-run-httpd\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.577264 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.577293 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdfdx\" (UniqueName: \"kubernetes.io/projected/12c7a413-1e2f-49b3-96b3-28755f853464-kube-api-access-tdfdx\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.577357 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-log-httpd\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.577389 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-config-data\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.577415 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.577468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-scripts\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679327 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-config-data\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679373 
4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-scripts\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679490 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-run-httpd\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679518 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679538 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdfdx\" (UniqueName: \"kubernetes.io/projected/12c7a413-1e2f-49b3-96b3-28755f853464-kube-api-access-tdfdx\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679588 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-log-httpd\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.679976 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-log-httpd\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.681686 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-run-httpd\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.684675 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.685975 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.687711 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-config-data\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.696150 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-scripts\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.701985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdfdx\" (UniqueName: \"kubernetes.io/projected/12c7a413-1e2f-49b3-96b3-28755f853464-kube-api-access-tdfdx\") pod \"ceilometer-0\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4746]: I0129 16:57:47.807755 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:48 crc kubenswrapper[4746]: I0129 16:57:48.072710 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:48 crc kubenswrapper[4746]: W0129 16:57:48.077564 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12c7a413_1e2f_49b3_96b3_28755f853464.slice/crio-355cf0d49f3923481a5c02888768f7c7d51e4ecdc4f70012f6ec0f37201dafb0 WatchSource:0}: Error finding container 355cf0d49f3923481a5c02888768f7c7d51e4ecdc4f70012f6ec0f37201dafb0: Status 404 returned error can't find the container with id 355cf0d49f3923481a5c02888768f7c7d51e4ecdc4f70012f6ec0f37201dafb0 Jan 29 16:57:48 crc kubenswrapper[4746]: I0129 16:57:48.388222 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerStarted","Data":"355cf0d49f3923481a5c02888768f7c7d51e4ecdc4f70012f6ec0f37201dafb0"} Jan 29 16:57:48 crc kubenswrapper[4746]: I0129 16:57:48.456108 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2174b2df-07c0-4a6b-9d20-ac32f03579e7" path="/var/lib/kubelet/pods/2174b2df-07c0-4a6b-9d20-ac32f03579e7/volumes" Jan 29 16:57:49 crc kubenswrapper[4746]: I0129 16:57:49.400826 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerStarted","Data":"827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856"} Jan 29 16:57:50 crc kubenswrapper[4746]: E0129 16:57:50.203716 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb9c66b9_97e0_49b8_8229_2e90537ad349.slice/crio-conmon-5a97250572ad990f099c81e7ee46a00c3f12562feabfb5aa66086e13ecd618cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb9c66b9_97e0_49b8_8229_2e90537ad349.slice/crio-5a97250572ad990f099c81e7ee46a00c3f12562feabfb5aa66086e13ecd618cc.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:57:50 crc kubenswrapper[4746]: I0129 16:57:50.424130 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerStarted","Data":"218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315"} Jan 29 16:57:50 crc kubenswrapper[4746]: I0129 16:57:50.424558 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerStarted","Data":"c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0"} Jan 29 16:57:50 crc kubenswrapper[4746]: I0129 16:57:50.428144 4746 generic.go:334] "Generic (PLEG): container finished" podID="cb9c66b9-97e0-49b8-8229-2e90537ad349" containerID="5a97250572ad990f099c81e7ee46a00c3f12562feabfb5aa66086e13ecd618cc" exitCode=0 Jan 29 16:57:50 crc kubenswrapper[4746]: I0129 16:57:50.428214 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rjblb" event={"ID":"cb9c66b9-97e0-49b8-8229-2e90537ad349","Type":"ContainerDied","Data":"5a97250572ad990f099c81e7ee46a00c3f12562feabfb5aa66086e13ecd618cc"} Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.753428 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.857694 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-combined-ca-bundle\") pod \"cb9c66b9-97e0-49b8-8229-2e90537ad349\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.858094 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-scripts\") pod \"cb9c66b9-97e0-49b8-8229-2e90537ad349\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.858286 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbf7p\" (UniqueName: \"kubernetes.io/projected/cb9c66b9-97e0-49b8-8229-2e90537ad349-kube-api-access-vbf7p\") pod \"cb9c66b9-97e0-49b8-8229-2e90537ad349\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.858311 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-config-data\") pod \"cb9c66b9-97e0-49b8-8229-2e90537ad349\" (UID: \"cb9c66b9-97e0-49b8-8229-2e90537ad349\") " Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.863127 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb9c66b9-97e0-49b8-8229-2e90537ad349-kube-api-access-vbf7p" (OuterVolumeSpecName: "kube-api-access-vbf7p") pod "cb9c66b9-97e0-49b8-8229-2e90537ad349" (UID: "cb9c66b9-97e0-49b8-8229-2e90537ad349"). InnerVolumeSpecName "kube-api-access-vbf7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.863201 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-scripts" (OuterVolumeSpecName: "scripts") pod "cb9c66b9-97e0-49b8-8229-2e90537ad349" (UID: "cb9c66b9-97e0-49b8-8229-2e90537ad349"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.887340 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb9c66b9-97e0-49b8-8229-2e90537ad349" (UID: "cb9c66b9-97e0-49b8-8229-2e90537ad349"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.901400 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-config-data" (OuterVolumeSpecName: "config-data") pod "cb9c66b9-97e0-49b8-8229-2e90537ad349" (UID: "cb9c66b9-97e0-49b8-8229-2e90537ad349"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.960491 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbf7p\" (UniqueName: \"kubernetes.io/projected/cb9c66b9-97e0-49b8-8229-2e90537ad349-kube-api-access-vbf7p\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.960535 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.960547 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:51 crc kubenswrapper[4746]: I0129 16:57:51.960558 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb9c66b9-97e0-49b8-8229-2e90537ad349-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.453881 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rjblb" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.458437 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rjblb" event={"ID":"cb9c66b9-97e0-49b8-8229-2e90537ad349","Type":"ContainerDied","Data":"0ba4957e5fd7c61a0366a017a436034bb1a850f0f1151ae267233be20f6a4bfe"} Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.458486 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba4957e5fd7c61a0366a017a436034bb1a850f0f1151ae267233be20f6a4bfe" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.554677 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:57:52 crc kubenswrapper[4746]: E0129 16:57:52.555082 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb9c66b9-97e0-49b8-8229-2e90537ad349" containerName="nova-cell0-conductor-db-sync" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.555102 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb9c66b9-97e0-49b8-8229-2e90537ad349" containerName="nova-cell0-conductor-db-sync" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.555350 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb9c66b9-97e0-49b8-8229-2e90537ad349" containerName="nova-cell0-conductor-db-sync" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.555887 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.564035 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.564365 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9ctqw" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.584299 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.675210 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.675356 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.675384 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg44n\" (UniqueName: \"kubernetes.io/projected/2cbc4caa-43b8-42c2-83ae-e2448dda745f-kube-api-access-jg44n\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.780600 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-combined-ca-bundle\") pod 
\"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.780752 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg44n\" (UniqueName: \"kubernetes.io/projected/2cbc4caa-43b8-42c2-83ae-e2448dda745f-kube-api-access-jg44n\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.781097 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.796508 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.796622 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.805720 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg44n\" (UniqueName: \"kubernetes.io/projected/2cbc4caa-43b8-42c2-83ae-e2448dda745f-kube-api-access-jg44n\") pod \"nova-cell0-conductor-0\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:52 crc kubenswrapper[4746]: I0129 16:57:52.893840 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4746]: I0129 16:57:53.404422 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:57:53 crc kubenswrapper[4746]: W0129 16:57:53.418057 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cbc4caa_43b8_42c2_83ae_e2448dda745f.slice/crio-79a3841462637483f641720d70a1a5a9eaca2ce3661ebb0be92c29db0c754c5d WatchSource:0}: Error finding container 79a3841462637483f641720d70a1a5a9eaca2ce3661ebb0be92c29db0c754c5d: Status 404 returned error can't find the container with id 79a3841462637483f641720d70a1a5a9eaca2ce3661ebb0be92c29db0c754c5d Jan 29 16:57:53 crc kubenswrapper[4746]: I0129 16:57:53.464536 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2cbc4caa-43b8-42c2-83ae-e2448dda745f","Type":"ContainerStarted","Data":"79a3841462637483f641720d70a1a5a9eaca2ce3661ebb0be92c29db0c754c5d"} Jan 29 16:57:54 crc kubenswrapper[4746]: I0129 16:57:54.477032 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2cbc4caa-43b8-42c2-83ae-e2448dda745f","Type":"ContainerStarted","Data":"35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c"} Jan 29 16:57:54 crc kubenswrapper[4746]: I0129 16:57:54.478902 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:54 crc kubenswrapper[4746]: I0129 16:57:54.493777 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.493756653 podStartE2EDuration="2.493756653s" podCreationTimestamp="2026-01-29 16:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:54.492350675 +0000 UTC m=+1396.892935339" watchObservedRunningTime="2026-01-29 16:57:54.493756653 +0000 UTC m=+1396.894341297" Jan 29 16:57:59 crc kubenswrapper[4746]: I0129 16:57:59.535459 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerStarted","Data":"eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c"} Jan 29 16:57:59 crc kubenswrapper[4746]: I0129 16:57:59.537282 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:57:59 crc kubenswrapper[4746]: I0129 16:57:59.569293 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.754716208 podStartE2EDuration="12.569249357s" podCreationTimestamp="2026-01-29 16:57:47 +0000 UTC" firstStartedPulling="2026-01-29 16:57:48.081683936 +0000 UTC m=+1390.482268580" lastFinishedPulling="2026-01-29 16:57:58.896217085 +0000 UTC m=+1401.296801729" observedRunningTime="2026-01-29 16:57:59.557415763 +0000 UTC m=+1401.958000447" watchObservedRunningTime="2026-01-29 16:57:59.569249357 +0000 UTC m=+1401.969834011" Jan 29 16:58:02 crc kubenswrapper[4746]: I0129 16:58:02.920543 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.357508 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rwqs5"] Jan 29 16:58:03 crc 
kubenswrapper[4746]: I0129 16:58:03.358560 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwqs5" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.361328 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.363089 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.375588 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwqs5"] Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.485476 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-scripts\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.485892 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-config-data\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.485929 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszkq\" (UniqueName: \"kubernetes.io/projected/dcbd102d-2909-4060-a027-5ebcc13063fb-kube-api-access-rszkq\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.486131 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5" Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.538153 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.539316 4746 util.go:30] "No sandbox for pod can be found. 
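The two "Observed pod startup duration" entries just above also document how podStartSLOduration is derived: it is the end-to-end duration minus the image-pull window. nova-cell0-conductor-0 recorded no pulls (zero-value pull timestamps), so its SLO and E2E durations are both 2.493756653s; ceilometer-0 spent 10.8 s pulling, leaving 1.754716208s. Re-deriving the ceilometer-0 numbers from the timestamps in that entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values copied verbatim from the ceilometer-0 startup-latency entry.
	created := parse("2026-01-29 16:57:47 +0000 UTC")
	firstPull := parse("2026-01-29 16:57:48.081683936 +0000 UTC")
	lastPull := parse("2026-01-29 16:57:58.896217085 +0000 UTC")
	observed := parse("2026-01-29 16:57:59.569249357 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration: 12.569249357s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 1.754716208s
	fmt.Println(e2e, slo)
}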
Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.548818 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.555012 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.590317 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.590412 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-scripts\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.590482 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-config-data\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.590512 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszkq\" (UniqueName: \"kubernetes.io/projected/dcbd102d-2909-4060-a027-5ebcc13063fb-kube-api-access-rszkq\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.601495 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.602958 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.607831 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-scripts\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.608724 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.616813 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.619803 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-config-data\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.665680 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.667097 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.674560 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.695254 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-config-data\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.695320 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.695354 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwjd\" (UniqueName: \"kubernetes.io/projected/8cd00935-6c5d-44a3-ba21-dd9ab842117f-kube-api-access-pjwjd\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.695369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.695389 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8gr\" (UniqueName: \"kubernetes.io/projected/ca384131-3efa-43c4-b89c-006e62e467d0-kube-api-access-np8gr\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.695433 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.700151 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszkq\" (UniqueName: \"kubernetes.io/projected/dcbd102d-2909-4060-a027-5ebcc13063fb-kube-api-access-rszkq\") pod \"nova-cell0-cell-mapping-rwqs5\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") " pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.727882 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.781460 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797732 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-config-data\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797788 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-config-data\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797840 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797860 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwjd\" (UniqueName: \"kubernetes.io/projected/8cd00935-6c5d-44a3-ba21-dd9ab842117f-kube-api-access-pjwjd\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797877 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797891 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8gr\" (UniqueName: \"kubernetes.io/projected/ca384131-3efa-43c4-b89c-006e62e467d0-kube-api-access-np8gr\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797931 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797951 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0a053c-6e7f-4c08-84ed-f1c908d76718-logs\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.797980 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.798004 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqn7j\" (UniqueName: \"kubernetes.io/projected/1e0a053c-6e7f-4c08-84ed-f1c908d76718-kube-api-access-rqn7j\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.815566 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.815725 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.819970 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-config-data\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.820994 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.827673 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8f7h7"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.829102 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.843740 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8f7h7"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.854047 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8gr\" (UniqueName: \"kubernetes.io/projected/ca384131-3efa-43c4-b89c-006e62e467d0-kube-api-access-np8gr\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.864920 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwjd\" (UniqueName: \"kubernetes.io/projected/8cd00935-6c5d-44a3-ba21-dd9ab842117f-kube-api-access-pjwjd\") pod \"nova-scheduler-0\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.878245 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.880366 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.888544 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899638 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqn7j\" (UniqueName: \"kubernetes.io/projected/1e0a053c-6e7f-4c08-84ed-f1c908d76718-kube-api-access-rqn7j\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899713 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4kd\" (UniqueName: \"kubernetes.io/projected/ced428a8-c8f4-4de1-89b7-965b4360f35d-kube-api-access-6n4kd\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899758 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-config-data\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899792 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-config\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899811 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899836 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899840 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.899851 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.900013 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.900045 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0a053c-6e7f-4c08-84ed-f1c908d76718-logs\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.900091 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.911660 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0a053c-6e7f-4c08-84ed-f1c908d76718-logs\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.912041 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.913388 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.932094 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-config-data\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.989923 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.991377 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:03 crc kubenswrapper[4746]: I0129 16:58:03.993435 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqn7j\" (UniqueName: \"kubernetes.io/projected/1e0a053c-6e7f-4c08-84ed-f1c908d76718-kube-api-access-rqn7j\") pod \"nova-metadata-0\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " pod="openstack/nova-metadata-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.007970 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n4kd\" (UniqueName: \"kubernetes.io/projected/ced428a8-c8f4-4de1-89b7-965b4360f35d-kube-api-access-6n4kd\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.008066 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aab9e1-1ed2-4993-b6a5-514dce216afd-logs\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.008593 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-config\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.008628 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.008671 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.009165 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.009508 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.009562 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-config-data\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.009592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bm2p\" (UniqueName: \"kubernetes.io/projected/73aab9e1-1ed2-4993-b6a5-514dce216afd-kube-api-access-6bm2p\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.009623 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.010973 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.011554 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.013854 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.016297 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.017671 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.019351 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-config\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.028689 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n4kd\" (UniqueName: \"kubernetes.io/projected/ced428a8-c8f4-4de1-89b7-965b4360f35d-kube-api-access-6n4kd\") pod \"dnsmasq-dns-557bbc7df7-8f7h7\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.113274 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aab9e1-1ed2-4993-b6a5-514dce216afd-logs\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.113755 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aab9e1-1ed2-4993-b6a5-514dce216afd-logs\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.115078 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.115169 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-config-data\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.115475 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bm2p\" (UniqueName: \"kubernetes.io/projected/73aab9e1-1ed2-4993-b6a5-514dce216afd-kube-api-access-6bm2p\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.122789 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.125639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-config-data\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.140029 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bm2p\" (UniqueName: \"kubernetes.io/projected/73aab9e1-1ed2-4993-b6a5-514dce216afd-kube-api-access-6bm2p\") pod \"nova-api-0\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.327567 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.344211 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:58:04 crc kubenswrapper[4746]: W0129 16:58:04.429846 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cd00935_6c5d_44a3_ba21_dd9ab842117f.slice/crio-5ba6195429d44fb8d6dc70d03aed6f530e63878f4c58aaf0857cc7683db3e641 WatchSource:0}: Error finding container 5ba6195429d44fb8d6dc70d03aed6f530e63878f4c58aaf0857cc7683db3e641: Status 404 returned error can't find the container with id 5ba6195429d44fb8d6dc70d03aed6f530e63878f4c58aaf0857cc7683db3e641
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.432651 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.436798 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.600680 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cd00935-6c5d-44a3-ba21-dd9ab842117f","Type":"ContainerStarted","Data":"5ba6195429d44fb8d6dc70d03aed6f530e63878f4c58aaf0857cc7683db3e641"}
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.628752 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.639935 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q96jb"]
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.641334 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: W0129 16:58:04.642637 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca384131_3efa_43c4_b89c_006e62e467d0.slice/crio-1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565 WatchSource:0}: Error finding container 1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565: Status 404 returned error can't find the container with id 1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.643485 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.643651 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.653659 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q96jb"]
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.714847 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwqs5"]
Jan 29 16:58:04 crc kubenswrapper[4746]: W0129 16:58:04.735535 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0a053c_6e7f_4c08_84ed_f1c908d76718.slice/crio-4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a WatchSource:0}: Error finding container 4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a: Status 404 returned error can't find the container with id 4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.745799 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.837621 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-config-data\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.837730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-scripts\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.837816 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvx5j\" (UniqueName: \"kubernetes.io/projected/bf970538-d73e-48d7-b242-081dd3eacf7d-kube-api-access-cvx5j\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.837851 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.902761 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8f7h7"]
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.939367 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-config-data\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.939692 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-scripts\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.939762 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvx5j\" (UniqueName: \"kubernetes.io/projected/bf970538-d73e-48d7-b242-081dd3eacf7d-kube-api-access-cvx5j\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.939794 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.948015 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-scripts\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.948346 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-config-data\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.952863 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:04 crc kubenswrapper[4746]: I0129 16:58:04.962807 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvx5j\" (UniqueName: \"kubernetes.io/projected/bf970538-d73e-48d7-b242-081dd3eacf7d-kube-api-access-cvx5j\") pod \"nova-cell1-conductor-db-sync-q96jb\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.001905 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.174736 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q96jb"
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.620403 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e0a053c-6e7f-4c08-84ed-f1c908d76718","Type":"ContainerStarted","Data":"4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a"}
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.622408 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q96jb"]
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.623181 4746 generic.go:334] "Generic (PLEG): container finished" podID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerID="c8656546478de17c0899fda5f6e66807296b60fedfeef0a8f00e903f198cf6ea" exitCode=0
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.623268 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" event={"ID":"ced428a8-c8f4-4de1-89b7-965b4360f35d","Type":"ContainerDied","Data":"c8656546478de17c0899fda5f6e66807296b60fedfeef0a8f00e903f198cf6ea"}
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.623308 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" event={"ID":"ced428a8-c8f4-4de1-89b7-965b4360f35d","Type":"ContainerStarted","Data":"ba7ee0f54d4d62d1a02cbf0750046c154f58a8c8fa91b2ffc4635578e58938e9"}
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.628834 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwqs5" event={"ID":"dcbd102d-2909-4060-a027-5ebcc13063fb","Type":"ContainerStarted","Data":"5621af2d5539e35318ee2f2c35d249a21df7981eeee1f5046ada00d4056a1baa"}
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.628867 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwqs5" event={"ID":"dcbd102d-2909-4060-a027-5ebcc13063fb","Type":"ContainerStarted","Data":"4d2b67a6757bb18691a909d1de4692fff020df98ff9c6e776013c0e42f52e7d2"}
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.630490 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca384131-3efa-43c4-b89c-006e62e467d0","Type":"ContainerStarted","Data":"1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565"}
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.631647 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73aab9e1-1ed2-4993-b6a5-514dce216afd","Type":"ContainerStarted","Data":"c50b0f0b63cb44ddd10a5866e6c7bd433e1e4e8f5a51585c0464f3908abd161b"}
Jan 29 16:58:05 crc kubenswrapper[4746]: I0129 16:58:05.683178 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rwqs5" podStartSLOduration=2.683160573 podStartE2EDuration="2.683160573s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:05.662695133 +0000 UTC m=+1408.063279777" watchObservedRunningTime="2026-01-29 16:58:05.683160573 +0000 UTC m=+1408.083745217"
Jan 29 16:58:05 crc kubenswrapper[4746]: W0129 16:58:05.920813 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf970538_d73e_48d7_b242_081dd3eacf7d.slice/crio-647b7ba251cd8472f6441344387039389dd26d843ea45206c8a2ace77fbf95e3 WatchSource:0}: Error finding container 647b7ba251cd8472f6441344387039389dd26d843ea45206c8a2ace77fbf95e3: Status 404 returned error can't find the container with id 647b7ba251cd8472f6441344387039389dd26d843ea45206c8a2ace77fbf95e3
Jan 29 16:58:06 crc kubenswrapper[4746]: I0129 16:58:06.645217 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q96jb" event={"ID":"bf970538-d73e-48d7-b242-081dd3eacf7d","Type":"ContainerStarted","Data":"647b7ba251cd8472f6441344387039389dd26d843ea45206c8a2ace77fbf95e3"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.308002 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.328088 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.675958 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-log" containerID="cri-o://8ccbc80a4f44bd3867a165ceb05bc702002c328fc89c8be74ef7c2b80a85893d" gracePeriod=30
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.676340 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-metadata" containerID="cri-o://4509b6065caa050a2798bb51537627795e31890f52e084af90edd17f47baad05" gracePeriod=30
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.676347 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e0a053c-6e7f-4c08-84ed-f1c908d76718","Type":"ContainerStarted","Data":"4509b6065caa050a2798bb51537627795e31890f52e084af90edd17f47baad05"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.676421 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e0a053c-6e7f-4c08-84ed-f1c908d76718","Type":"ContainerStarted","Data":"8ccbc80a4f44bd3867a165ceb05bc702002c328fc89c8be74ef7c2b80a85893d"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.687646 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" event={"ID":"ced428a8-c8f4-4de1-89b7-965b4360f35d","Type":"ContainerStarted","Data":"a6ca57c1b1427d4152c2d3d29d17abec1ff2930930f94171eb6a4832a28e0ff4"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.687782 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.689166 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca384131-3efa-43c4-b89c-006e62e467d0","Type":"ContainerStarted","Data":"c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.689255 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ca384131-3efa-43c4-b89c-006e62e467d0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577" gracePeriod=30
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.694658 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cd00935-6c5d-44a3-ba21-dd9ab842117f","Type":"ContainerStarted","Data":"e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.700905 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q96jb" event={"ID":"bf970538-d73e-48d7-b242-081dd3eacf7d","Type":"ContainerStarted","Data":"ae31072ec8addf54fcf59db27f019a8ca754b118d7d33ba75d44337ae689a8b2"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.708895 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73aab9e1-1ed2-4993-b6a5-514dce216afd","Type":"ContainerStarted","Data":"8dacc955812e306a9aaeb45e046d73221a37a83950672006244f5c7f01cd3ac6"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.708939 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73aab9e1-1ed2-4993-b6a5-514dce216afd","Type":"ContainerStarted","Data":"4fffcf797a263576242c604f4d0715bd4c171d6e8ef1d66a7a116921443972a9"}
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.710226 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.475146114 podStartE2EDuration="4.710211017s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" firstStartedPulling="2026-01-29 16:58:04.743116126 +0000 UTC m=+1407.143700770" lastFinishedPulling="2026-01-29 16:58:06.978181029 +0000 UTC m=+1409.378765673" observedRunningTime="2026-01-29 16:58:07.700953394 +0000 UTC m=+1410.101538038" watchObservedRunningTime="2026-01-29 16:58:07.710211017 +0000 UTC m=+1410.110795661"
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.732889 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.212107756 podStartE2EDuration="4.732867498s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" firstStartedPulling="2026-01-29 16:58:04.432456363 +0000 UTC m=+1406.833041007" lastFinishedPulling="2026-01-29 16:58:06.953216075 +0000 UTC m=+1409.353800749" observedRunningTime="2026-01-29 16:58:07.721217199 +0000 UTC m=+1410.121801843" watchObservedRunningTime="2026-01-29 16:58:07.732867498 +0000 UTC m=+1410.133452142"
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.743053 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.435934799 podStartE2EDuration="4.743033257s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" firstStartedPulling="2026-01-29 16:58:04.646038095 +0000 UTC m=+1407.046622739" lastFinishedPulling="2026-01-29 16:58:06.953136553 +0000 UTC m=+1409.353721197" observedRunningTime="2026-01-29 16:58:07.737751932 +0000 UTC m=+1410.138336586" watchObservedRunningTime="2026-01-29 16:58:07.743033257 +0000 UTC m=+1410.143617901"
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.754349 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-q96jb" podStartSLOduration=3.754330646 podStartE2EDuration="3.754330646s" podCreationTimestamp="2026-01-29 16:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:07.751396956 +0000 UTC m=+1410.151981600" watchObservedRunningTime="2026-01-29 16:58:07.754330646 +0000 UTC m=+1410.154915290"
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.773121 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" podStartSLOduration=4.77310055 podStartE2EDuration="4.77310055s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:07.769843261 +0000 UTC m=+1410.170427915" watchObservedRunningTime="2026-01-29 16:58:07.77310055 +0000 UTC m=+1410.173685194"
Jan 29 16:58:07 crc kubenswrapper[4746]: I0129 16:58:07.788527 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.806890725 podStartE2EDuration="4.788485602s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" firstStartedPulling="2026-01-29 16:58:04.998614157 +0000 UTC m=+1407.399198791" lastFinishedPulling="2026-01-29 16:58:06.980209024 +0000 UTC m=+1409.380793668" observedRunningTime="2026-01-29 16:58:07.786896198 +0000 UTC m=+1410.187480842" watchObservedRunningTime="2026-01-29 16:58:07.788485602 +0000 UTC m=+1410.189070266"
Jan 29 16:58:08 crc kubenswrapper[4746]: I0129 16:58:08.720278 4746 generic.go:334] "Generic (PLEG): container finished" podID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerID="8ccbc80a4f44bd3867a165ceb05bc702002c328fc89c8be74ef7c2b80a85893d" exitCode=143
Jan 29 16:58:08 crc kubenswrapper[4746]: I0129 16:58:08.720507 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e0a053c-6e7f-4c08-84ed-f1c908d76718","Type":"ContainerDied","Data":"8ccbc80a4f44bd3867a165ceb05bc702002c328fc89c8be74ef7c2b80a85893d"}
Jan 29 16:58:08 crc kubenswrapper[4746]: I0129 16:58:08.912837 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 29 16:58:08 crc kubenswrapper[4746]: I0129 16:58:08.992102 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:09 crc kubenswrapper[4746]: I0129 16:58:09.018227 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 29 16:58:09 crc kubenswrapper[4746]: I0129 16:58:09.018273 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 29 16:58:12 crc kubenswrapper[4746]: I0129 16:58:12.762796 4746 generic.go:334] "Generic (PLEG): container finished" podID="dcbd102d-2909-4060-a027-5ebcc13063fb" containerID="5621af2d5539e35318ee2f2c35d249a21df7981eeee1f5046ada00d4056a1baa" exitCode=0
Jan 29 16:58:12 crc kubenswrapper[4746]: I0129 16:58:12.762886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwqs5" event={"ID":"dcbd102d-2909-4060-a027-5ebcc13063fb","Type":"ContainerDied","Data":"5621af2d5539e35318ee2f2c35d249a21df7981eeee1f5046ada00d4056a1baa"}
Jan 29 16:58:13 crc kubenswrapper[4746]: I0129 16:58:13.913556 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 29 16:58:13 crc kubenswrapper[4746]: I0129 16:58:13.945628 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.164766 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.330326 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.353364 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-config-data\") pod \"dcbd102d-2909-4060-a027-5ebcc13063fb\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") "
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.353433 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.353468 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rszkq\" (UniqueName: \"kubernetes.io/projected/dcbd102d-2909-4060-a027-5ebcc13063fb-kube-api-access-rszkq\") pod \"dcbd102d-2909-4060-a027-5ebcc13063fb\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") "
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.353487 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.353588 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-scripts\") pod \"dcbd102d-2909-4060-a027-5ebcc13063fb\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") "
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.353633 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-combined-ca-bundle\") pod \"dcbd102d-2909-4060-a027-5ebcc13063fb\" (UID: \"dcbd102d-2909-4060-a027-5ebcc13063fb\") "
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.363107 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-scripts" (OuterVolumeSpecName: "scripts") pod "dcbd102d-2909-4060-a027-5ebcc13063fb" (UID: "dcbd102d-2909-4060-a027-5ebcc13063fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.377166 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbd102d-2909-4060-a027-5ebcc13063fb-kube-api-access-rszkq" (OuterVolumeSpecName: "kube-api-access-rszkq") pod "dcbd102d-2909-4060-a027-5ebcc13063fb" (UID: "dcbd102d-2909-4060-a027-5ebcc13063fb"). InnerVolumeSpecName "kube-api-access-rszkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.389382 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcbd102d-2909-4060-a027-5ebcc13063fb" (UID: "dcbd102d-2909-4060-a027-5ebcc13063fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.401031 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-config-data" (OuterVolumeSpecName: "config-data") pod "dcbd102d-2909-4060-a027-5ebcc13063fb" (UID: "dcbd102d-2909-4060-a027-5ebcc13063fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.433011 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-gch9n"]
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.433270 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" podUID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerName="dnsmasq-dns" containerID="cri-o://9245156fcc29c2389db6bcbc8dc65879e7625c94c760439657b66307d38664d4" gracePeriod=10
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.457796 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.457823 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rszkq\" (UniqueName: \"kubernetes.io/projected/dcbd102d-2909-4060-a027-5ebcc13063fb-kube-api-access-rszkq\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.457832 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.457841 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbd102d-2909-4060-a027-5ebcc13063fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.790019 4746 generic.go:334] "Generic (PLEG): container finished" podID="bf970538-d73e-48d7-b242-081dd3eacf7d" containerID="ae31072ec8addf54fcf59db27f019a8ca754b118d7d33ba75d44337ae689a8b2" exitCode=0
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.790065 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q96jb" event={"ID":"bf970538-d73e-48d7-b242-081dd3eacf7d","Type":"ContainerDied","Data":"ae31072ec8addf54fcf59db27f019a8ca754b118d7d33ba75d44337ae689a8b2"}
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.796478 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerID="9245156fcc29c2389db6bcbc8dc65879e7625c94c760439657b66307d38664d4" exitCode=0
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.796558 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" event={"ID":"6c220cf0-5a8e-40d6-8034-abd6fbe38228","Type":"ContainerDied","Data":"9245156fcc29c2389db6bcbc8dc65879e7625c94c760439657b66307d38664d4"}
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.807912 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rwqs5" event={"ID":"dcbd102d-2909-4060-a027-5ebcc13063fb","Type":"ContainerDied","Data":"4d2b67a6757bb18691a909d1de4692fff020df98ff9c6e776013c0e42f52e7d2"}
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.807954 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d2b67a6757bb18691a909d1de4692fff020df98ff9c6e776013c0e42f52e7d2"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.808006 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rwqs5"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.852805 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 29 16:58:14 crc kubenswrapper[4746]: I0129 16:58:14.890723 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n"
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.070948 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-config\") pod \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") "
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.072935 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-swift-storage-0\") pod \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") "
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.073132 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-sb\") pod \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") "
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.073282 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-svc\") pod \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") "
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.073404 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-nb\") pod \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") "
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.073660 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg24m\" (UniqueName: \"kubernetes.io/projected/6c220cf0-5a8e-40d6-8034-abd6fbe38228-kube-api-access-tg24m\") pod \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\" (UID: \"6c220cf0-5a8e-40d6-8034-abd6fbe38228\") "
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.074372 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.074579 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-log" containerID="cri-o://4fffcf797a263576242c604f4d0715bd4c171d6e8ef1d66a7a116921443972a9" gracePeriod=30
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.074947 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-api" containerID="cri-o://8dacc955812e306a9aaeb45e046d73221a37a83950672006244f5c7f01cd3ac6" gracePeriod=30
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.087627 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": EOF"
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.093058 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": EOF"
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.135487 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c220cf0-5a8e-40d6-8034-abd6fbe38228-kube-api-access-tg24m" (OuterVolumeSpecName: "kube-api-access-tg24m") pod "6c220cf0-5a8e-40d6-8034-abd6fbe38228" (UID: "6c220cf0-5a8e-40d6-8034-abd6fbe38228"). InnerVolumeSpecName "kube-api-access-tg24m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.169655 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-config" (OuterVolumeSpecName: "config") pod "6c220cf0-5a8e-40d6-8034-abd6fbe38228" (UID: "6c220cf0-5a8e-40d6-8034-abd6fbe38228"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.173542 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c220cf0-5a8e-40d6-8034-abd6fbe38228" (UID: "6c220cf0-5a8e-40d6-8034-abd6fbe38228"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.175731 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg24m\" (UniqueName: \"kubernetes.io/projected/6c220cf0-5a8e-40d6-8034-abd6fbe38228-kube-api-access-tg24m\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.175770 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.175882 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.178665 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c220cf0-5a8e-40d6-8034-abd6fbe38228" (UID: "6c220cf0-5a8e-40d6-8034-abd6fbe38228"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.179917 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c220cf0-5a8e-40d6-8034-abd6fbe38228" (UID: "6c220cf0-5a8e-40d6-8034-abd6fbe38228"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.186680 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c220cf0-5a8e-40d6-8034-abd6fbe38228" (UID: "6c220cf0-5a8e-40d6-8034-abd6fbe38228"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.277559 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.277860 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.277870 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c220cf0-5a8e-40d6-8034-abd6fbe38228-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.299289 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.817480 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" event={"ID":"6c220cf0-5a8e-40d6-8034-abd6fbe38228","Type":"ContainerDied","Data":"5b054f9cafc8a6dd0d240724fa5447c27104345e5746c5666494e2a2621ba9ca"}
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.817544 4746 scope.go:117] "RemoveContainer" containerID="9245156fcc29c2389db6bcbc8dc65879e7625c94c760439657b66307d38664d4"
Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.817562 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-gch9n" Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.820253 4746 generic.go:334] "Generic (PLEG): container finished" podID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerID="4fffcf797a263576242c604f4d0715bd4c171d6e8ef1d66a7a116921443972a9" exitCode=143 Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.820293 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73aab9e1-1ed2-4993-b6a5-514dce216afd","Type":"ContainerDied","Data":"4fffcf797a263576242c604f4d0715bd4c171d6e8ef1d66a7a116921443972a9"} Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.851018 4746 scope.go:117] "RemoveContainer" containerID="e11c46187c0c5efa4520c8c5ce645af356b307d4ec5100ee3f05955f5731aa9e" Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.871240 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-gch9n"] Jan 29 16:58:15 crc kubenswrapper[4746]: I0129 16:58:15.878897 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-gch9n"] Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.233486 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q96jb" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.399467 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-combined-ca-bundle\") pod \"bf970538-d73e-48d7-b242-081dd3eacf7d\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.399894 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-config-data\") pod \"bf970538-d73e-48d7-b242-081dd3eacf7d\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.400066 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-scripts\") pod \"bf970538-d73e-48d7-b242-081dd3eacf7d\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.400151 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvx5j\" (UniqueName: \"kubernetes.io/projected/bf970538-d73e-48d7-b242-081dd3eacf7d-kube-api-access-cvx5j\") pod \"bf970538-d73e-48d7-b242-081dd3eacf7d\" (UID: \"bf970538-d73e-48d7-b242-081dd3eacf7d\") " Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.404535 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf970538-d73e-48d7-b242-081dd3eacf7d-kube-api-access-cvx5j" (OuterVolumeSpecName: "kube-api-access-cvx5j") pod "bf970538-d73e-48d7-b242-081dd3eacf7d" (UID: "bf970538-d73e-48d7-b242-081dd3eacf7d"). InnerVolumeSpecName "kube-api-access-cvx5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.408022 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-scripts" (OuterVolumeSpecName: "scripts") pod "bf970538-d73e-48d7-b242-081dd3eacf7d" (UID: "bf970538-d73e-48d7-b242-081dd3eacf7d"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.426537 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf970538-d73e-48d7-b242-081dd3eacf7d" (UID: "bf970538-d73e-48d7-b242-081dd3eacf7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.429330 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-config-data" (OuterVolumeSpecName: "config-data") pod "bf970538-d73e-48d7-b242-081dd3eacf7d" (UID: "bf970538-d73e-48d7-b242-081dd3eacf7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.460606 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" path="/var/lib/kubelet/pods/6c220cf0-5a8e-40d6-8034-abd6fbe38228/volumes" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.502500 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.502536 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvx5j\" (UniqueName: \"kubernetes.io/projected/bf970538-d73e-48d7-b242-081dd3eacf7d-kube-api-access-cvx5j\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.502551 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.502562 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf970538-d73e-48d7-b242-081dd3eacf7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.831474 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q96jb" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.831455 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q96jb" event={"ID":"bf970538-d73e-48d7-b242-081dd3eacf7d","Type":"ContainerDied","Data":"647b7ba251cd8472f6441344387039389dd26d843ea45206c8a2ace77fbf95e3"} Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.831598 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="647b7ba251cd8472f6441344387039389dd26d843ea45206c8a2ace77fbf95e3" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.833290 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8cd00935-6c5d-44a3-ba21-dd9ab842117f" containerName="nova-scheduler-scheduler" containerID="cri-o://e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" gracePeriod=30 Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.890354 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:58:16 crc kubenswrapper[4746]: E0129 16:58:16.890852 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf970538-d73e-48d7-b242-081dd3eacf7d" containerName="nova-cell1-conductor-db-sync" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.890874 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf970538-d73e-48d7-b242-081dd3eacf7d" containerName="nova-cell1-conductor-db-sync" Jan 29 16:58:16 crc kubenswrapper[4746]: E0129 16:58:16.890885 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerName="init" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.890894 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerName="init" Jan 29 16:58:16 crc kubenswrapper[4746]: E0129 16:58:16.890911 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbd102d-2909-4060-a027-5ebcc13063fb" containerName="nova-manage" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.890919 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbd102d-2909-4060-a027-5ebcc13063fb" containerName="nova-manage" Jan 29 16:58:16 crc kubenswrapper[4746]: E0129 16:58:16.890972 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerName="dnsmasq-dns" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.890982 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerName="dnsmasq-dns" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.891211 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf970538-d73e-48d7-b242-081dd3eacf7d" containerName="nova-cell1-conductor-db-sync" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.891239 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbd102d-2909-4060-a027-5ebcc13063fb" containerName="nova-manage" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.891260 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c220cf0-5a8e-40d6-8034-abd6fbe38228" containerName="dnsmasq-dns" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.892171 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.899618 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 16:58:16 crc kubenswrapper[4746]: I0129 16:58:16.900068 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.011150 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2lp8\" (UniqueName: \"kubernetes.io/projected/b98c0c71-5d0c-48b2-a7d6-515a44ded344-kube-api-access-j2lp8\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.011249 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.011388 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.112442 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2lp8\" (UniqueName: \"kubernetes.io/projected/b98c0c71-5d0c-48b2-a7d6-515a44ded344-kube-api-access-j2lp8\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.112534 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.112626 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.116718 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.128602 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2lp8\" (UniqueName: \"kubernetes.io/projected/b98c0c71-5d0c-48b2-a7d6-515a44ded344-kube-api-access-j2lp8\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.131298 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.212568 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.639758 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.823616 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.849777 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b98c0c71-5d0c-48b2-a7d6-515a44ded344","Type":"ContainerStarted","Data":"c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6"} Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.849824 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b98c0c71-5d0c-48b2-a7d6-515a44ded344","Type":"ContainerStarted","Data":"a5f3c84cfbc0d4ea75bbf22cea92fba730c076c53b884dd9c1577ea98d4f9dd9"} Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.849961 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:17 crc kubenswrapper[4746]: I0129 16:58:17.892537 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.892503982 podStartE2EDuration="1.892503982s" podCreationTimestamp="2026-01-29 16:58:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:17.88039927 +0000 UTC m=+1420.280983914" watchObservedRunningTime="2026-01-29 16:58:17.892503982 +0000 UTC m=+1420.293088626" Jan 29 16:58:18 crc kubenswrapper[4746]: E0129 16:58:18.914013 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7 is running failed: container process not found" containerID="e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:58:18 crc kubenswrapper[4746]: E0129 16:58:18.915049 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7 is running failed: container process not found" containerID="e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:58:18 crc kubenswrapper[4746]: E0129 16:58:18.915612 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7 is running failed: container process not found" containerID="e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 
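
The repeated ExecSync failures that follow come from a readiness probe execing pgrep inside the nova-scheduler container; once the container process is gone, every attempt returns NotFound until the pod is torn down. The probe definition presumably looks like the sketch below; only the command is taken from the log, the timing values are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Readiness probe issuing the exact ExecSync command kubelet retries in the log.
	// PeriodSeconds/TimeoutSeconds are assumptions; only the command comes from the log.
	probe := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
			},
		},
		PeriodSeconds:  10,
		TimeoutSeconds: 5,
	}
	fmt.Printf("%+v\n", probe)
}
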
16:58:18 crc kubenswrapper[4746]: E0129 16:58:18.915672 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8cd00935-6c5d-44a3-ba21-dd9ab842117f" containerName="nova-scheduler-scheduler" Jan 29 16:58:18 crc kubenswrapper[4746]: I0129 16:58:18.951841 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-42gjl"] Jan 29 16:58:18 crc kubenswrapper[4746]: I0129 16:58:18.953696 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:18 crc kubenswrapper[4746]: I0129 16:58:18.999799 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42gjl"] Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.054517 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfm5r\" (UniqueName: \"kubernetes.io/projected/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-kube-api-access-xfm5r\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.054611 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-catalog-content\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.055337 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-utilities\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.158575 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfm5r\" (UniqueName: \"kubernetes.io/projected/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-kube-api-access-xfm5r\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.158720 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-catalog-content\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.159526 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-catalog-content\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.159921 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-utilities\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.160325 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-utilities\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.189790 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfm5r\" (UniqueName: \"kubernetes.io/projected/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-kube-api-access-xfm5r\") pod \"redhat-operators-42gjl\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.323739 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.432172 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.565419 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjwjd\" (UniqueName: \"kubernetes.io/projected/8cd00935-6c5d-44a3-ba21-dd9ab842117f-kube-api-access-pjwjd\") pod \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.565631 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-combined-ca-bundle\") pod \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.565670 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-config-data\") pod \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\" (UID: \"8cd00935-6c5d-44a3-ba21-dd9ab842117f\") " Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.569200 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd00935-6c5d-44a3-ba21-dd9ab842117f-kube-api-access-pjwjd" (OuterVolumeSpecName: "kube-api-access-pjwjd") pod "8cd00935-6c5d-44a3-ba21-dd9ab842117f" (UID: "8cd00935-6c5d-44a3-ba21-dd9ab842117f"). InnerVolumeSpecName "kube-api-access-pjwjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.591498 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cd00935-6c5d-44a3-ba21-dd9ab842117f" (UID: "8cd00935-6c5d-44a3-ba21-dd9ab842117f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.605303 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-config-data" (OuterVolumeSpecName: "config-data") pod "8cd00935-6c5d-44a3-ba21-dd9ab842117f" (UID: "8cd00935-6c5d-44a3-ba21-dd9ab842117f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.667820 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjwjd\" (UniqueName: \"kubernetes.io/projected/8cd00935-6c5d-44a3-ba21-dd9ab842117f-kube-api-access-pjwjd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.667854 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.667863 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd00935-6c5d-44a3-ba21-dd9ab842117f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.786003 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42gjl"] Jan 29 16:58:19 crc kubenswrapper[4746]: W0129 16:58:19.790412 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b9ac52_08a9_4d7d_a46c_285ed708fcc6.slice/crio-5cb35447bca7d21888170beba3773c94e26f91cd524fe515cbaa4b5e254d51bb WatchSource:0}: Error finding container 5cb35447bca7d21888170beba3773c94e26f91cd524fe515cbaa4b5e254d51bb: Status 404 returned error can't find the container with id 5cb35447bca7d21888170beba3773c94e26f91cd524fe515cbaa4b5e254d51bb Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.881347 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42gjl" event={"ID":"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6","Type":"ContainerStarted","Data":"5cb35447bca7d21888170beba3773c94e26f91cd524fe515cbaa4b5e254d51bb"} Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.883348 4746 generic.go:334] "Generic (PLEG): container finished" podID="8cd00935-6c5d-44a3-ba21-dd9ab842117f" containerID="e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" exitCode=0 Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.883375 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cd00935-6c5d-44a3-ba21-dd9ab842117f","Type":"ContainerDied","Data":"e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7"} Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.883393 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cd00935-6c5d-44a3-ba21-dd9ab842117f","Type":"ContainerDied","Data":"5ba6195429d44fb8d6dc70d03aed6f530e63878f4c58aaf0857cc7683db3e641"} Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.883408 4746 scope.go:117] "RemoveContainer" containerID="e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.883440 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.914806 4746 scope.go:117] "RemoveContainer" containerID="e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" Jan 29 16:58:19 crc kubenswrapper[4746]: E0129 16:58:19.916733 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7\": container with ID starting with e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7 not found: ID does not exist" containerID="e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.916806 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7"} err="failed to get container status \"e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7\": rpc error: code = NotFound desc = could not find container \"e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7\": container with ID starting with e05a3108e0c3fdc8aeaf36801988047de450c4abf985dd83f155765ee3a48fa7 not found: ID does not exist" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.928881 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.944309 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.971296 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:19 crc kubenswrapper[4746]: E0129 16:58:19.972231 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd00935-6c5d-44a3-ba21-dd9ab842117f" containerName="nova-scheduler-scheduler" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.972250 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd00935-6c5d-44a3-ba21-dd9ab842117f" containerName="nova-scheduler-scheduler" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.972733 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd00935-6c5d-44a3-ba21-dd9ab842117f" containerName="nova-scheduler-scheduler" Jan 29 16:58:19 crc kubenswrapper[4746]: I0129 16:58:19.976735 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.000767 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.043658 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.077550 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.077615 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9frx\" (UniqueName: \"kubernetes.io/projected/235c7742-ae7a-4603-b350-23ffb2c0e545-kube-api-access-l9frx\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.077646 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-config-data\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.179385 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9frx\" (UniqueName: \"kubernetes.io/projected/235c7742-ae7a-4603-b350-23ffb2c0e545-kube-api-access-l9frx\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.179795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-config-data\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.180844 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.186420 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-config-data\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.186946 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.199378 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9frx\" (UniqueName: 
\"kubernetes.io/projected/235c7742-ae7a-4603-b350-23ffb2c0e545-kube-api-access-l9frx\") pod \"nova-scheduler-0\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.363076 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.461113 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd00935-6c5d-44a3-ba21-dd9ab842117f" path="/var/lib/kubelet/pods/8cd00935-6c5d-44a3-ba21-dd9ab842117f/volumes" Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.690096 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.920228 4746 generic.go:334] "Generic (PLEG): container finished" podID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerID="2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443" exitCode=0 Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.920460 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42gjl" event={"ID":"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6","Type":"ContainerDied","Data":"2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443"} Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.949042 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"235c7742-ae7a-4603-b350-23ffb2c0e545","Type":"ContainerStarted","Data":"97f429cadfe3a5765e940da2b8f14413872dc5263f48360062dd7fe9dc21006e"} Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.974855 4746 generic.go:334] "Generic (PLEG): container finished" podID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerID="8dacc955812e306a9aaeb45e046d73221a37a83950672006244f5c7f01cd3ac6" exitCode=0 Jan 29 16:58:20 crc kubenswrapper[4746]: I0129 16:58:20.974907 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73aab9e1-1ed2-4993-b6a5-514dce216afd","Type":"ContainerDied","Data":"8dacc955812e306a9aaeb45e046d73221a37a83950672006244f5c7f01cd3ac6"} Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.052251 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:21 crc kubenswrapper[4746]: E0129 16:58:21.072893 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:58:21 crc kubenswrapper[4746]: E0129 16:58:21.073039 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfm5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-42gjl_openshift-marketplace(c3b9ac52-08a9-4d7d-a46c-285ed708fcc6): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:21 crc kubenswrapper[4746]: E0129 16:58:21.074419 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-42gjl" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.209436 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-config-data\") pod \"73aab9e1-1ed2-4993-b6a5-514dce216afd\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.209478 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-combined-ca-bundle\") pod \"73aab9e1-1ed2-4993-b6a5-514dce216afd\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") 
" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.209568 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aab9e1-1ed2-4993-b6a5-514dce216afd-logs\") pod \"73aab9e1-1ed2-4993-b6a5-514dce216afd\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.209619 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bm2p\" (UniqueName: \"kubernetes.io/projected/73aab9e1-1ed2-4993-b6a5-514dce216afd-kube-api-access-6bm2p\") pod \"73aab9e1-1ed2-4993-b6a5-514dce216afd\" (UID: \"73aab9e1-1ed2-4993-b6a5-514dce216afd\") " Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.210037 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73aab9e1-1ed2-4993-b6a5-514dce216afd-logs" (OuterVolumeSpecName: "logs") pod "73aab9e1-1ed2-4993-b6a5-514dce216afd" (UID: "73aab9e1-1ed2-4993-b6a5-514dce216afd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.214620 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73aab9e1-1ed2-4993-b6a5-514dce216afd-kube-api-access-6bm2p" (OuterVolumeSpecName: "kube-api-access-6bm2p") pod "73aab9e1-1ed2-4993-b6a5-514dce216afd" (UID: "73aab9e1-1ed2-4993-b6a5-514dce216afd"). InnerVolumeSpecName "kube-api-access-6bm2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.235723 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73aab9e1-1ed2-4993-b6a5-514dce216afd" (UID: "73aab9e1-1ed2-4993-b6a5-514dce216afd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.242324 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-config-data" (OuterVolumeSpecName: "config-data") pod "73aab9e1-1ed2-4993-b6a5-514dce216afd" (UID: "73aab9e1-1ed2-4993-b6a5-514dce216afd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.311979 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.312246 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aab9e1-1ed2-4993-b6a5-514dce216afd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.312324 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aab9e1-1ed2-4993-b6a5-514dce216afd-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.312380 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bm2p\" (UniqueName: \"kubernetes.io/projected/73aab9e1-1ed2-4993-b6a5-514dce216afd-kube-api-access-6bm2p\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.986200 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"235c7742-ae7a-4603-b350-23ffb2c0e545","Type":"ContainerStarted","Data":"03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4"} Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.988788 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"73aab9e1-1ed2-4993-b6a5-514dce216afd","Type":"ContainerDied","Data":"c50b0f0b63cb44ddd10a5866e6c7bd433e1e4e8f5a51585c0464f3908abd161b"} Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.988845 4746 scope.go:117] "RemoveContainer" containerID="8dacc955812e306a9aaeb45e046d73221a37a83950672006244f5c7f01cd3ac6" Jan 29 16:58:21 crc kubenswrapper[4746]: I0129 16:58:21.988804 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:21 crc kubenswrapper[4746]: E0129 16:58:21.990516 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-42gjl" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.010567 4746 scope.go:117] "RemoveContainer" containerID="4fffcf797a263576242c604f4d0715bd4c171d6e8ef1d66a7a116921443972a9" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.043687 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.043656308 podStartE2EDuration="3.043656308s" podCreationTimestamp="2026-01-29 16:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:22.005700817 +0000 UTC m=+1424.406285471" watchObservedRunningTime="2026-01-29 16:58:22.043656308 +0000 UTC m=+1424.444240962" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.065314 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.081160 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.095932 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:22 crc kubenswrapper[4746]: E0129 16:58:22.096437 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-log" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.096462 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-log" Jan 29 16:58:22 crc kubenswrapper[4746]: E0129 16:58:22.096497 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-api" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.096508 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-api" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.096735 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-api" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.096770 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" containerName="nova-api-log" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.097980 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.101398 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.111402 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.228506 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-config-data\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.228851 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttd2g\" (UniqueName: \"kubernetes.io/projected/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-kube-api-access-ttd2g\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.228967 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-logs\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.229043 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.251315 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.332324 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttd2g\" (UniqueName: \"kubernetes.io/projected/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-kube-api-access-ttd2g\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.332425 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-logs\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.332967 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-logs\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.333098 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.333136 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-config-data\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.338836 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.338850 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-config-data\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.360177 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttd2g\" (UniqueName: \"kubernetes.io/projected/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-kube-api-access-ttd2g\") pod \"nova-api-0\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.417815 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.500957 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73aab9e1-1ed2-4993-b6a5-514dce216afd" path="/var/lib/kubelet/pods/73aab9e1-1ed2-4993-b6a5-514dce216afd/volumes" Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.626677 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.627131 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0439cd84-86aa-425b-91d1-5ab5a68e3210" containerName="kube-state-metrics" containerID="cri-o://ffc65679e605dec8adc14cf14da40b1627ad469087535fd49a93a48f43cd94dc" gracePeriod=30 Jan 29 16:58:22 crc kubenswrapper[4746]: I0129 16:58:22.992400 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:22 crc kubenswrapper[4746]: W0129 16:58:22.992480 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d0a46b8_9bd8_49e3_ba06_0cca834c2009.slice/crio-094122458c275415eb8ff6ff963ad0cbb6c0565a307b7ce39cf76ae885007f1b WatchSource:0}: Error finding container 094122458c275415eb8ff6ff963ad0cbb6c0565a307b7ce39cf76ae885007f1b: Status 404 returned error can't find the container with id 094122458c275415eb8ff6ff963ad0cbb6c0565a307b7ce39cf76ae885007f1b Jan 29 16:58:23 crc kubenswrapper[4746]: I0129 16:58:23.013303 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d0a46b8-9bd8-49e3-ba06-0cca834c2009","Type":"ContainerStarted","Data":"094122458c275415eb8ff6ff963ad0cbb6c0565a307b7ce39cf76ae885007f1b"} Jan 29 16:58:23 crc kubenswrapper[4746]: I0129 16:58:23.021581 4746 generic.go:334] "Generic (PLEG): container finished" podID="0439cd84-86aa-425b-91d1-5ab5a68e3210" containerID="ffc65679e605dec8adc14cf14da40b1627ad469087535fd49a93a48f43cd94dc" exitCode=2 Jan 29 16:58:23 crc kubenswrapper[4746]: I0129 16:58:23.022449 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"0439cd84-86aa-425b-91d1-5ab5a68e3210","Type":"ContainerDied","Data":"ffc65679e605dec8adc14cf14da40b1627ad469087535fd49a93a48f43cd94dc"} Jan 29 16:58:23 crc kubenswrapper[4746]: I0129 16:58:23.079453 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:58:23 crc kubenswrapper[4746]: I0129 16:58:23.153932 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2smzj\" (UniqueName: \"kubernetes.io/projected/0439cd84-86aa-425b-91d1-5ab5a68e3210-kube-api-access-2smzj\") pod \"0439cd84-86aa-425b-91d1-5ab5a68e3210\" (UID: \"0439cd84-86aa-425b-91d1-5ab5a68e3210\") " Jan 29 16:58:23 crc kubenswrapper[4746]: I0129 16:58:23.205541 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0439cd84-86aa-425b-91d1-5ab5a68e3210-kube-api-access-2smzj" (OuterVolumeSpecName: "kube-api-access-2smzj") pod "0439cd84-86aa-425b-91d1-5ab5a68e3210" (UID: "0439cd84-86aa-425b-91d1-5ab5a68e3210"). InnerVolumeSpecName "kube-api-access-2smzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:23 crc kubenswrapper[4746]: I0129 16:58:23.258748 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2smzj\" (UniqueName: \"kubernetes.io/projected/0439cd84-86aa-425b-91d1-5ab5a68e3210-kube-api-access-2smzj\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.032689 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0439cd84-86aa-425b-91d1-5ab5a68e3210","Type":"ContainerDied","Data":"fea67c55f1bff5d4dc42a5916e535df159d5f73d1d2a43970e6149e07b525916"} Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.033055 4746 scope.go:117] "RemoveContainer" containerID="ffc65679e605dec8adc14cf14da40b1627ad469087535fd49a93a48f43cd94dc" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.032712 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.034736 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d0a46b8-9bd8-49e3-ba06-0cca834c2009","Type":"ContainerStarted","Data":"21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36"} Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.034762 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d0a46b8-9bd8-49e3-ba06-0cca834c2009","Type":"ContainerStarted","Data":"472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f"} Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.064039 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.064022338 podStartE2EDuration="2.064022338s" podCreationTimestamp="2026-01-29 16:58:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:24.062179717 +0000 UTC m=+1426.462764381" watchObservedRunningTime="2026-01-29 16:58:24.064022338 +0000 UTC m=+1426.464606982" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.085504 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.099051 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.110054 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:58:24 crc kubenswrapper[4746]: E0129 16:58:24.110677 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0439cd84-86aa-425b-91d1-5ab5a68e3210" containerName="kube-state-metrics" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.110702 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0439cd84-86aa-425b-91d1-5ab5a68e3210" containerName="kube-state-metrics" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.110951 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0439cd84-86aa-425b-91d1-5ab5a68e3210" containerName="kube-state-metrics" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.111747 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.115233 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.115458 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.119642 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.173274 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.173388 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x82d\" (UniqueName: \"kubernetes.io/projected/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-api-access-7x82d\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.173490 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.173587 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.275603 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x82d\" (UniqueName: \"kubernetes.io/projected/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-api-access-7x82d\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.275676 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.275715 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.275818 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.279691 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.279734 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.280592 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.289777 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x82d\" (UniqueName: \"kubernetes.io/projected/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-api-access-7x82d\") pod \"kube-state-metrics-0\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") " pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.429110 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.468994 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0439cd84-86aa-425b-91d1-5ab5a68e3210" path="/var/lib/kubelet/pods/0439cd84-86aa-425b-91d1-5ab5a68e3210/volumes" Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.623126 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.625071 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-central-agent" containerID="cri-o://827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856" gracePeriod=30 Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.625723 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="proxy-httpd" containerID="cri-o://eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c" gracePeriod=30 Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.625884 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="sg-core" containerID="cri-o://218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315" gracePeriod=30 Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.625955 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-notification-agent" containerID="cri-o://c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0" gracePeriod=30 Jan 29 16:58:24 crc kubenswrapper[4746]: I0129 16:58:24.929310 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:58:25 crc kubenswrapper[4746]: I0129 16:58:25.047110 4746 generic.go:334] "Generic (PLEG): container finished" podID="12c7a413-1e2f-49b3-96b3-28755f853464" containerID="eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c" exitCode=0 Jan 29 16:58:25 crc kubenswrapper[4746]: I0129 16:58:25.048448 4746 generic.go:334] "Generic (PLEG): container finished" podID="12c7a413-1e2f-49b3-96b3-28755f853464" containerID="218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315" exitCode=2 Jan 29 16:58:25 crc kubenswrapper[4746]: I0129 16:58:25.047334 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerDied","Data":"eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c"} Jan 29 16:58:25 crc kubenswrapper[4746]: I0129 16:58:25.048697 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerDied","Data":"218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315"} Jan 29 16:58:25 crc kubenswrapper[4746]: I0129 16:58:25.051954 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f8f1a81-ca32-4335-be69-a9159ede91fa","Type":"ContainerStarted","Data":"62ca78d227bfa84aab7b5a6c9a7c08f59dbba90f42b773f8bc3e2e620e00c7c6"} Jan 29 16:58:25 crc kubenswrapper[4746]: I0129 16:58:25.363518 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-scheduler-0" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.063364 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f8f1a81-ca32-4335-be69-a9159ede91fa","Type":"ContainerStarted","Data":"614c1528dcb502faea4895d6443017d7e52a267fbfb970de1158d60f296102fb"} Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.063738 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.067067 4746 generic.go:334] "Generic (PLEG): container finished" podID="12c7a413-1e2f-49b3-96b3-28755f853464" containerID="827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856" exitCode=0 Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.067132 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerDied","Data":"827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856"} Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.089086 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.732164198 podStartE2EDuration="2.089063596s" podCreationTimestamp="2026-01-29 16:58:24 +0000 UTC" firstStartedPulling="2026-01-29 16:58:24.940482174 +0000 UTC m=+1427.341066818" lastFinishedPulling="2026-01-29 16:58:25.297381572 +0000 UTC m=+1427.697966216" observedRunningTime="2026-01-29 16:58:26.082039533 +0000 UTC m=+1428.482624187" watchObservedRunningTime="2026-01-29 16:58:26.089063596 +0000 UTC m=+1428.489648250" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.694119 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.828874 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-scripts\") pod \"12c7a413-1e2f-49b3-96b3-28755f853464\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.829211 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdfdx\" (UniqueName: \"kubernetes.io/projected/12c7a413-1e2f-49b3-96b3-28755f853464-kube-api-access-tdfdx\") pod \"12c7a413-1e2f-49b3-96b3-28755f853464\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.829261 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-log-httpd\") pod \"12c7a413-1e2f-49b3-96b3-28755f853464\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.829314 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-sg-core-conf-yaml\") pod \"12c7a413-1e2f-49b3-96b3-28755f853464\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.829369 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-run-httpd\") pod \"12c7a413-1e2f-49b3-96b3-28755f853464\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.829482 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-config-data\") pod \"12c7a413-1e2f-49b3-96b3-28755f853464\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.829570 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-combined-ca-bundle\") pod \"12c7a413-1e2f-49b3-96b3-28755f853464\" (UID: \"12c7a413-1e2f-49b3-96b3-28755f853464\") " Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.829925 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "12c7a413-1e2f-49b3-96b3-28755f853464" (UID: "12c7a413-1e2f-49b3-96b3-28755f853464"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.830125 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "12c7a413-1e2f-49b3-96b3-28755f853464" (UID: "12c7a413-1e2f-49b3-96b3-28755f853464"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.834990 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c7a413-1e2f-49b3-96b3-28755f853464-kube-api-access-tdfdx" (OuterVolumeSpecName: "kube-api-access-tdfdx") pod "12c7a413-1e2f-49b3-96b3-28755f853464" (UID: "12c7a413-1e2f-49b3-96b3-28755f853464"). InnerVolumeSpecName "kube-api-access-tdfdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.837624 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-scripts" (OuterVolumeSpecName: "scripts") pod "12c7a413-1e2f-49b3-96b3-28755f853464" (UID: "12c7a413-1e2f-49b3-96b3-28755f853464"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.860435 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "12c7a413-1e2f-49b3-96b3-28755f853464" (UID: "12c7a413-1e2f-49b3-96b3-28755f853464"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.906641 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12c7a413-1e2f-49b3-96b3-28755f853464" (UID: "12c7a413-1e2f-49b3-96b3-28755f853464"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.931999 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.932034 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdfdx\" (UniqueName: \"kubernetes.io/projected/12c7a413-1e2f-49b3-96b3-28755f853464-kube-api-access-tdfdx\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.932044 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.932052 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.932060 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12c7a413-1e2f-49b3-96b3-28755f853464-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.932068 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4746]: I0129 16:58:26.941531 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-config-data" (OuterVolumeSpecName: "config-data") pod "12c7a413-1e2f-49b3-96b3-28755f853464" (UID: "12c7a413-1e2f-49b3-96b3-28755f853464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.033393 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12c7a413-1e2f-49b3-96b3-28755f853464-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.080314 4746 generic.go:334] "Generic (PLEG): container finished" podID="12c7a413-1e2f-49b3-96b3-28755f853464" containerID="c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0" exitCode=0 Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.080365 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerDied","Data":"c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0"} Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.080419 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"12c7a413-1e2f-49b3-96b3-28755f853464","Type":"ContainerDied","Data":"355cf0d49f3923481a5c02888768f7c7d51e4ecdc4f70012f6ec0f37201dafb0"} Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.080444 4746 scope.go:117] "RemoveContainer" containerID="eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.080461 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.110455 4746 scope.go:117] "RemoveContainer" containerID="218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.120145 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.131962 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.144834 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.145335 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-central-agent" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145356 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-central-agent" Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.145371 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-notification-agent" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145379 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-notification-agent" Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.145411 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="proxy-httpd" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145420 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="proxy-httpd" Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.145432 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="sg-core" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145439 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="sg-core" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145651 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-notification-agent" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145664 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="proxy-httpd" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145686 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="sg-core" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.145700 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" containerName="ceilometer-central-agent" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.147469 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.148378 4746 scope.go:117] "RemoveContainer" containerID="c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.150796 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.151151 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.151381 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.170810 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.199490 4746 scope.go:117] "RemoveContainer" containerID="827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.234970 4746 scope.go:117] "RemoveContainer" containerID="eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c" Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.235612 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c\": container with ID starting with eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c not found: ID does not exist" containerID="eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.235669 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c"} err="failed to get container status \"eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c\": rpc error: code = NotFound desc = could not find container 
\"eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c\": container with ID starting with eb4fdc14f9adaeffca30a414edbf02c356946625e21d14f83bb379b64cca9f2c not found: ID does not exist" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.235702 4746 scope.go:117] "RemoveContainer" containerID="218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.235913 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.236008 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-config-data\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.236061 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-run-httpd\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.236096 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7vk\" (UniqueName: \"kubernetes.io/projected/b98227cf-8738-4cda-be6b-0bab9d1dedbc-kube-api-access-ln7vk\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.236184 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-log-httpd\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.236307 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.236490 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-scripts\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.236557 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.239733 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315\": container with ID starting with 218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315 not found: ID does not exist" containerID="218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.239795 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315"} err="failed to get container status \"218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315\": rpc error: code = NotFound desc = could not find container \"218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315\": container with ID starting with 218bf142165df123e43ede0f2d5647e871fd7713ef42e3e6461541cfd377b315 not found: ID does not exist" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.239841 4746 scope.go:117] "RemoveContainer" containerID="c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0" Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.241387 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0\": container with ID starting with c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0 not found: ID does not exist" containerID="c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.241426 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0"} err="failed to get container status \"c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0\": rpc error: code = NotFound desc = could not find container \"c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0\": container with ID starting with c443b1340bf2771c52f5d42ecf0467d76a7e5d9142adb6ca2944a893ac5fe1e0 not found: ID does not exist" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.241450 4746 scope.go:117] "RemoveContainer" containerID="827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856" Jan 29 16:58:27 crc kubenswrapper[4746]: E0129 16:58:27.242660 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856\": container with ID starting with 827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856 not found: ID does not exist" containerID="827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.242696 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856"} err="failed to get container status \"827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856\": rpc error: code = NotFound desc = could not find container \"827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856\": container with ID starting with 827623ffc71984453e0d8b5931605bccc2b3f33a32473b03cd38d455905cf856 not found: ID does not exist" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.338813 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.338946 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-config-data\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.338990 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-run-httpd\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.339024 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln7vk\" (UniqueName: \"kubernetes.io/projected/b98227cf-8738-4cda-be6b-0bab9d1dedbc-kube-api-access-ln7vk\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.339055 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-log-httpd\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.339120 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.339267 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-scripts\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.339322 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.339686 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-log-httpd\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.339890 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-run-httpd\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.345293 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-scripts\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.345452 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.345684 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-config-data\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.347085 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.347276 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.363311 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln7vk\" (UniqueName: \"kubernetes.io/projected/b98227cf-8738-4cda-be6b-0bab9d1dedbc-kube-api-access-ln7vk\") pod \"ceilometer-0\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.473368 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:27 crc kubenswrapper[4746]: I0129 16:58:27.918021 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:28 crc kubenswrapper[4746]: I0129 16:58:28.090828 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerStarted","Data":"9ea47350eadf8128588e2c744e6b4780ec2a1faea1d2b905cbc244a453a7c871"} Jan 29 16:58:28 crc kubenswrapper[4746]: I0129 16:58:28.455303 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c7a413-1e2f-49b3-96b3-28755f853464" path="/var/lib/kubelet/pods/12c7a413-1e2f-49b3-96b3-28755f853464/volumes" Jan 29 16:58:29 crc kubenswrapper[4746]: I0129 16:58:29.114069 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerStarted","Data":"0e3695259a1a09ad3053780839e119f951dfb581abce18c9b11fbffe2eb201f5"} Jan 29 16:58:30 crc kubenswrapper[4746]: I0129 16:58:30.128061 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerStarted","Data":"37712c908cdce3ed92511d70ae0a0f5e7f59442b0bc0766b62b8445512ee3bf0"} Jan 29 16:58:30 crc kubenswrapper[4746]: I0129 16:58:30.363826 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:58:30 crc kubenswrapper[4746]: I0129 16:58:30.396105 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:58:30 crc kubenswrapper[4746]: E0129 16:58:30.612750 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:58:30 crc kubenswrapper[4746]: E0129 16:58:30.613135 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/tls.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/tls.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln7vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b98227cf-8738-4cda-be6b-0bab9d1dedbc): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:30 crc kubenswrapper[4746]: E0129 16:58:30.614496 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" Jan 29 16:58:31 crc kubenswrapper[4746]: I0129 16:58:31.162504 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerStarted","Data":"cfedcf5768911c36f98ea99964b98f495b189dc0673f213f07b4ba88580569d1"} Jan 29 16:58:31 crc kubenswrapper[4746]: E0129 16:58:31.164954 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" Jan 29 16:58:31 crc kubenswrapper[4746]: I0129 16:58:31.205122 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:58:32 crc kubenswrapper[4746]: E0129 16:58:32.173515 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" Jan 29 16:58:32 crc kubenswrapper[4746]: I0129 16:58:32.418807 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:58:32 crc kubenswrapper[4746]: I0129 16:58:32.418847 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:58:33 crc kubenswrapper[4746]: I0129 16:58:33.500361 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:33 crc kubenswrapper[4746]: I0129 16:58:33.500358 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:34 crc kubenswrapper[4746]: I0129 16:58:34.443958 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 16:58:37 crc kubenswrapper[4746]: E0129 16:58:37.608531 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:58:37 crc kubenswrapper[4746]: E0129 16:58:37.608915 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfm5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-42gjl_openshift-marketplace(c3b9ac52-08a9-4d7d-a46c-285ed708fcc6): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:37 crc kubenswrapper[4746]: E0129 16:58:37.610903 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-42gjl" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.173958 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.233099 4746 generic.go:334] "Generic (PLEG): container finished" podID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerID="4509b6065caa050a2798bb51537627795e31890f52e084af90edd17f47baad05" exitCode=137 Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.233174 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e0a053c-6e7f-4c08-84ed-f1c908d76718","Type":"ContainerDied","Data":"4509b6065caa050a2798bb51537627795e31890f52e084af90edd17f47baad05"} Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.233245 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e0a053c-6e7f-4c08-84ed-f1c908d76718","Type":"ContainerDied","Data":"4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a"} Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.233261 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.234980 4746 generic.go:334] "Generic (PLEG): container finished" podID="ca384131-3efa-43c4-b89c-006e62e467d0" containerID="c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577" exitCode=137 Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.235022 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca384131-3efa-43c4-b89c-006e62e467d0","Type":"ContainerDied","Data":"c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577"} Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.235047 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca384131-3efa-43c4-b89c-006e62e467d0","Type":"ContainerDied","Data":"1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565"} Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.235064 4746 scope.go:117] "RemoveContainer" containerID="c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.235246 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.277814 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-combined-ca-bundle\") pod \"ca384131-3efa-43c4-b89c-006e62e467d0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.277882 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8gr\" (UniqueName: \"kubernetes.io/projected/ca384131-3efa-43c4-b89c-006e62e467d0-kube-api-access-np8gr\") pod \"ca384131-3efa-43c4-b89c-006e62e467d0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.278057 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-config-data\") pod \"ca384131-3efa-43c4-b89c-006e62e467d0\" (UID: \"ca384131-3efa-43c4-b89c-006e62e467d0\") " Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.282886 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca384131-3efa-43c4-b89c-006e62e467d0-kube-api-access-np8gr" (OuterVolumeSpecName: "kube-api-access-np8gr") pod "ca384131-3efa-43c4-b89c-006e62e467d0" (UID: "ca384131-3efa-43c4-b89c-006e62e467d0"). InnerVolumeSpecName "kube-api-access-np8gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.293170 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.296168 4746 scope.go:117] "RemoveContainer" containerID="c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577" Jan 29 16:58:38 crc kubenswrapper[4746]: E0129 16:58:38.296693 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577\": container with ID starting with c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577 not found: ID does not exist" containerID="c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.296748 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577"} err="failed to get container status \"c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577\": rpc error: code = NotFound desc = could not find container \"c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577\": container with ID starting with c3f43734a967a9db5dab7bd7d5b02067b0e2a0e52fac3e1bb42e648e08c76577 not found: ID does not exist" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.306660 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-config-data" (OuterVolumeSpecName: "config-data") pod "ca384131-3efa-43c4-b89c-006e62e467d0" (UID: "ca384131-3efa-43c4-b89c-006e62e467d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.313523 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca384131-3efa-43c4-b89c-006e62e467d0" (UID: "ca384131-3efa-43c4-b89c-006e62e467d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.379780 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-config-data\") pod \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.380057 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-combined-ca-bundle\") pod \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.380270 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0a053c-6e7f-4c08-84ed-f1c908d76718-logs\") pod \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.380686 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0a053c-6e7f-4c08-84ed-f1c908d76718-logs" (OuterVolumeSpecName: "logs") pod "1e0a053c-6e7f-4c08-84ed-f1c908d76718" (UID: "1e0a053c-6e7f-4c08-84ed-f1c908d76718"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.381006 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqn7j\" (UniqueName: \"kubernetes.io/projected/1e0a053c-6e7f-4c08-84ed-f1c908d76718-kube-api-access-rqn7j\") pod \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\" (UID: \"1e0a053c-6e7f-4c08-84ed-f1c908d76718\") " Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.381631 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.381753 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0a053c-6e7f-4c08-84ed-f1c908d76718-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.381841 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca384131-3efa-43c4-b89c-006e62e467d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.381917 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8gr\" (UniqueName: \"kubernetes.io/projected/ca384131-3efa-43c4-b89c-006e62e467d0-kube-api-access-np8gr\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.384835 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0a053c-6e7f-4c08-84ed-f1c908d76718-kube-api-access-rqn7j" (OuterVolumeSpecName: "kube-api-access-rqn7j") pod "1e0a053c-6e7f-4c08-84ed-f1c908d76718" (UID: "1e0a053c-6e7f-4c08-84ed-f1c908d76718"). InnerVolumeSpecName "kube-api-access-rqn7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.406754 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-config-data" (OuterVolumeSpecName: "config-data") pod "1e0a053c-6e7f-4c08-84ed-f1c908d76718" (UID: "1e0a053c-6e7f-4c08-84ed-f1c908d76718"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.408498 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e0a053c-6e7f-4c08-84ed-f1c908d76718" (UID: "1e0a053c-6e7f-4c08-84ed-f1c908d76718"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.483972 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.484002 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0a053c-6e7f-4c08-84ed-f1c908d76718-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.484013 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqn7j\" (UniqueName: \"kubernetes.io/projected/1e0a053c-6e7f-4c08-84ed-f1c908d76718-kube-api-access-rqn7j\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.561235 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.571105 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.584836 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:38 crc kubenswrapper[4746]: E0129 16:58:38.585641 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-metadata" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.585736 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-metadata" Jan 29 16:58:38 crc kubenswrapper[4746]: E0129 16:58:38.585838 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca384131-3efa-43c4-b89c-006e62e467d0" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.585904 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca384131-3efa-43c4-b89c-006e62e467d0" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:58:38 crc kubenswrapper[4746]: E0129 16:58:38.585967 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-log" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.586034 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-log" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.586589 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca384131-3efa-43c4-b89c-006e62e467d0" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.586687 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-log" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.586781 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" containerName="nova-metadata-metadata" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.587531 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.598093 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.598594 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.599119 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.603129 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.688720 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.688773 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.688795 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.688858 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v228n\" (UniqueName: \"kubernetes.io/projected/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-kube-api-access-v228n\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.688887 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.790867 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.790935 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 
16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.790961 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.791001 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v228n\" (UniqueName: \"kubernetes.io/projected/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-kube-api-access-v228n\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.791026 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.795682 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.796311 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.796391 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.796746 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.810131 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v228n\" (UniqueName: \"kubernetes.io/projected/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-kube-api-access-v228n\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:38 crc kubenswrapper[4746]: I0129 16:58:38.910131 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.247798 4746 util.go:48] "No ready sandbox for pod can be found. 
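Each secret-backed volume above goes through VerifyControllerAttachedVolume and then MountVolume.SetUp, which materializes the secret keys as files under the pod's volume directory. A rough sketch of the SetUp step, assuming a simple write-then-rename rather than the kubelet's timestamped-dir atomic writer:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setUpSecretVolume writes each secret key as a file, staging to a temp
// name first so readers never observe a partially written file.
func setUpSecretVolume(dir string, data map[string][]byte) error {
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	for name, content := range data {
		tmp := filepath.Join(dir, ".."+name+".tmp")
		if err := os.WriteFile(tmp, content, 0o640); err != nil {
			return err
		}
		if err := os.Rename(tmp, filepath.Join(dir, name)); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "vencrypt-tls-certs")
	err := setUpSecretVolume(dir, map[string][]byte{"tls.crt": []byte("..."), "tls.key": []byte("...")})
	fmt.Println("MountVolume.SetUp:", err)
}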
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.274063 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.282091 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.302236 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.303685 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.305486 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.306795 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.327478 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.370450 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:39 crc kubenswrapper[4746]: W0129 16:58:39.372244 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c37ca07_37b6_4fa9_9f8f_eca8bd8eb3df.slice/crio-60ca2f05d0ce1b39248b519dec6ffa6cd71d00d643127d49b94250b92f519a7a WatchSource:0}: Error finding container 60ca2f05d0ce1b39248b519dec6ffa6cd71d00d643127d49b94250b92f519a7a: Status 404 returned error can't find the container with id 60ca2f05d0ce1b39248b519dec6ffa6cd71d00d643127d49b94250b92f519a7a Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.410536 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-config-data\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.410580 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5171220-fae1-41cf-9c83-6b02dec686bc-logs\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.410695 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvzd\" (UniqueName: \"kubernetes.io/projected/c5171220-fae1-41cf-9c83-6b02dec686bc-kube-api-access-psvzd\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.410885 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.411317 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.513089 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvzd\" (UniqueName: \"kubernetes.io/projected/c5171220-fae1-41cf-9c83-6b02dec686bc-kube-api-access-psvzd\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.513165 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.513231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.513287 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-config-data\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.513307 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5171220-fae1-41cf-9c83-6b02dec686bc-logs\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.513879 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5171220-fae1-41cf-9c83-6b02dec686bc-logs\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.517878 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.518821 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.519039 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-config-data\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.529363 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvzd\" (UniqueName: \"kubernetes.io/projected/c5171220-fae1-41cf-9c83-6b02dec686bc-kube-api-access-psvzd\") pod \"nova-metadata-0\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " pod="openstack/nova-metadata-0" Jan 29 16:58:39 crc kubenswrapper[4746]: I0129 16:58:39.632014 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:40 crc kubenswrapper[4746]: I0129 16:58:40.039822 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:40 crc kubenswrapper[4746]: I0129 16:58:40.259420 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df","Type":"ContainerStarted","Data":"c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d"} Jan 29 16:58:40 crc kubenswrapper[4746]: I0129 16:58:40.259501 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df","Type":"ContainerStarted","Data":"60ca2f05d0ce1b39248b519dec6ffa6cd71d00d643127d49b94250b92f519a7a"} Jan 29 16:58:40 crc kubenswrapper[4746]: I0129 16:58:40.260844 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5171220-fae1-41cf-9c83-6b02dec686bc","Type":"ContainerStarted","Data":"ea50ab4d467b36b877d933f320f737f8f0cfffb86f538b90d86384b1e04c3d9e"} Jan 29 16:58:40 crc kubenswrapper[4746]: I0129 16:58:40.456041 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0a053c-6e7f-4c08-84ed-f1c908d76718" path="/var/lib/kubelet/pods/1e0a053c-6e7f-4c08-84ed-f1c908d76718/volumes" Jan 29 16:58:40 crc kubenswrapper[4746]: I0129 16:58:40.457111 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca384131-3efa-43c4-b89c-006e62e467d0" path="/var/lib/kubelet/pods/ca384131-3efa-43c4-b89c-006e62e467d0/volumes" Jan 29 16:58:41 crc kubenswrapper[4746]: I0129 16:58:41.270098 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5171220-fae1-41cf-9c83-6b02dec686bc","Type":"ContainerStarted","Data":"5d105cc2dc5db03d9d4314d9618988ab0d96e28e8c2b9f889998c93baf2e7b67"} Jan 29 16:58:41 crc kubenswrapper[4746]: I0129 16:58:41.270148 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5171220-fae1-41cf-9c83-6b02dec686bc","Type":"ContainerStarted","Data":"82848e4e7ae0bac815a1a9d99a0cd0a4cc3305a64a1316f9cca739ff4dd7764c"} Jan 29 16:58:41 crc kubenswrapper[4746]: I0129 16:58:41.289954 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.289932754 podStartE2EDuration="2.289932754s" podCreationTimestamp="2026-01-29 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:41.285627366 +0000 UTC m=+1443.686212010" watchObservedRunningTime="2026-01-29 16:58:41.289932754 +0000 UTC m=+1443.690517408" Jan 29 16:58:41 crc kubenswrapper[4746]: I0129 16:58:41.296641 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.296622937 podStartE2EDuration="3.296622937s" podCreationTimestamp="2026-01-29 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
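podStartSLOduration above equals observedRunningTime minus podCreationTimestamp; the zero first/lastFinishedPulling values mean no image pull time was subtracted because the images were already present. A sketch of that arithmetic, assuming the tracker discounts only the pull interval:

package main

import (
	"fmt"
	"time"
)

// startSLODuration reproduces the reported figure: total startup time,
// minus time spent pulling images when a pull actually happened.
func startSLODuration(created, observedRunning, pullStart, pullEnd time.Time) time.Duration {
	d := observedRunning.Sub(created)
	if !pullStart.IsZero() && !pullEnd.IsZero() {
		d -= pullEnd.Sub(pullStart) // discount image-pull time
	}
	return d
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-29T16:58:39Z")
	running := created.Add(2289932754 * time.Nanosecond)
	// zero pull timestamps, as in the nova-metadata-0 line above
	fmt.Println(startSLODuration(created, running, time.Time{}, time.Time{})) // 2.289932754s
}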
Jan 29 16:58:41 crc kubenswrapper[4746]: I0129 16:58:41.296641 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.296622937 podStartE2EDuration="3.296622937s" podCreationTimestamp="2026-01-29 16:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:40.286852829 +0000 UTC m=+1442.687437523" watchObservedRunningTime="2026-01-29 16:58:41.296622937 +0000 UTC m=+1443.697207591"
Jan 29 16:58:41 crc kubenswrapper[4746]: E0129 16:58:41.539119 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0a053c_6e7f_4c08_84ed_f1c908d76718.slice/crio-4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca384131_3efa_43c4_b89c_006e62e467d0.slice/crio-1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565\": RecentStats: unable to find data in memory cache]"
Jan 29 16:58:42 crc kubenswrapper[4746]: I0129 16:58:42.421699 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 29 16:58:42 crc kubenswrapper[4746]: I0129 16:58:42.422597 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 29 16:58:42 crc kubenswrapper[4746]: I0129 16:58:42.422733 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 29 16:58:42 crc kubenswrapper[4746]: I0129 16:58:42.429053 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.293153 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.299376 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.478037 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-2bgxn"]
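The cadvisor "Partial failure" above is the stats provider aggregating per-cgroup lookups; the two crio- entries belong to the sandboxes deleted at 16:58:38, so their stats are already gone from the memory cache and only those lookups fail. A sketch of that tolerate-and-aggregate pattern:

package main

import (
	"errors"
	"fmt"
)

// collectStats keeps gathering stats for every cgroup and joins the
// individual failures, instead of aborting on the first missing entry.
func collectStats(cgroups []string, lookup func(string) error) error {
	var errs []error
	for _, cg := range cgroups {
		if err := lookup(cg); err != nil {
			errs = append(errs, fmt.Errorf("%q: %w", cg, err))
			continue // remaining cgroups are still collected
		}
	}
	return errors.Join(errs...) // nil when everything succeeded
}

func main() {
	missing := errors.New("RecentStats: unable to find data in memory cache")
	err := collectStats([]string{"crio-4e9277...", "crio-1c2e31..."},
		func(string) error { return missing })
	fmt.Println("partial failures:", err)
}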
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.502282 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-2bgxn"] Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.589565 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.589623 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmnd\" (UniqueName: \"kubernetes.io/projected/d1205318-995d-4d3f-8c94-4faab5e1e48a-kube-api-access-cxmnd\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.589691 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.589722 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.589807 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-svc\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.589893 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-config\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.691269 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxmnd\" (UniqueName: \"kubernetes.io/projected/d1205318-995d-4d3f-8c94-4faab5e1e48a-kube-api-access-cxmnd\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.691355 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.691378 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.691444 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-svc\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.691499 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-config\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.691548 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.692351 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.692417 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.692486 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-svc\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.692507 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.692592 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-config\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.723570 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxmnd\" (UniqueName: 
\"kubernetes.io/projected/d1205318-995d-4d3f-8c94-4faab5e1e48a-kube-api-access-cxmnd\") pod \"dnsmasq-dns-5ddd577785-2bgxn\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.809955 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:43 crc kubenswrapper[4746]: I0129 16:58:43.912343 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:44 crc kubenswrapper[4746]: I0129 16:58:44.285724 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-2bgxn"] Jan 29 16:58:44 crc kubenswrapper[4746]: I0129 16:58:44.302991 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" event={"ID":"d1205318-995d-4d3f-8c94-4faab5e1e48a","Type":"ContainerStarted","Data":"e06fbd5a7ae2e801c15c65c4eb3c5043b1813a02a3ce53ba997e0417b6330ee5"} Jan 29 16:58:44 crc kubenswrapper[4746]: E0129 16:58:44.587846 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:58:44 crc kubenswrapper[4746]: E0129 16:58:44.588085 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/tls.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/tls.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln7vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
Jan 29 16:58:44 crc kubenswrapper[4746]: E0129 16:58:44.588085 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/tls.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/tls.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln7vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b98227cf-8738-4cda-be6b-0bab9d1dedbc): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:58:44 crc kubenswrapper[4746]: E0129 16:58:44.589276 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc"
Jan 29 16:58:44 crc kubenswrapper[4746]: I0129 16:58:44.632696 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 29 16:58:44 crc kubenswrapper[4746]: I0129 16:58:44.633701 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 29 16:58:45 crc kubenswrapper[4746]: I0129 16:58:45.226239 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:58:45 crc kubenswrapper[4746]: I0129 16:58:45.313313 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerID="144102022a25ffa5da982493e8ec0cbb1eba8a52241bd3edb3be370715f9d1f1" exitCode=0
Jan 29 16:58:45 crc kubenswrapper[4746]: I0129 16:58:45.313356 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" event={"ID":"d1205318-995d-4d3f-8c94-4faab5e1e48a","Type":"ContainerDied","Data":"144102022a25ffa5da982493e8ec0cbb1eba8a52241bd3edb3be370715f9d1f1"}
Jan 29 16:58:45 crc kubenswrapper[4746]: I0129 16:58:45.314166 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-central-agent" containerID="cri-o://0e3695259a1a09ad3053780839e119f951dfb581abce18c9b11fbffe2eb201f5" gracePeriod=30
Jan 29 16:58:45 crc kubenswrapper[4746]: I0129 16:58:45.314308 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-notification-agent" containerID="cri-o://37712c908cdce3ed92511d70ae0a0f5e7f59442b0bc0766b62b8445512ee3bf0" gracePeriod=30
Jan 29 16:58:45 crc kubenswrapper[4746]: I0129 16:58:45.314381 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="sg-core" containerID="cri-o://cfedcf5768911c36f98ea99964b98f495b189dc0673f213f07b4ba88580569d1" gracePeriod=30
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.209285 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.323731 4746 generic.go:334] "Generic (PLEG): container finished" podID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerID="cfedcf5768911c36f98ea99964b98f495b189dc0673f213f07b4ba88580569d1" exitCode=2
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.323765 4746 generic.go:334] "Generic (PLEG): container finished" podID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerID="0e3695259a1a09ad3053780839e119f951dfb581abce18c9b11fbffe2eb201f5" exitCode=0
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.323815 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerDied","Data":"cfedcf5768911c36f98ea99964b98f495b189dc0673f213f07b4ba88580569d1"}
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.323843 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerDied","Data":"0e3695259a1a09ad3053780839e119f951dfb581abce18c9b11fbffe2eb201f5"}
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.325784 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" event={"ID":"d1205318-995d-4d3f-8c94-4faab5e1e48a","Type":"ContainerStarted","Data":"cd7bdfeaf74aad9a30010243d284207e43e919190849f1ae0003a99378ba6011"}
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.325871 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-log" containerID="cri-o://472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f" gracePeriod=30
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.325973 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-api" containerID="cri-o://21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36" gracePeriod=30
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.326316 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn"
Jan 29 16:58:46 crc kubenswrapper[4746]: I0129 16:58:46.360718 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" podStartSLOduration=3.360702518 podStartE2EDuration="3.360702518s" podCreationTimestamp="2026-01-29 16:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:46.356269376 +0000 UTC m=+1448.756854020" watchObservedRunningTime="2026-01-29 16:58:46.360702518 +0000 UTC m=+1448.761287172"
Jan 29 16:58:47 crc kubenswrapper[4746]: I0129 16:58:47.335754 4746 generic.go:334] "Generic (PLEG): container finished" podID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerID="472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f" exitCode=143
Jan 29 16:58:47 crc kubenswrapper[4746]: I0129 16:58:47.335852 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d0a46b8-9bd8-49e3-ba06-0cca834c2009","Type":"ContainerDied","Data":"472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f"}
Jan 29 16:58:48 crc kubenswrapper[4746]: I0129 16:58:48.911046 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:48 crc kubenswrapper[4746]: I0129 16:58:48.930656 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.065012 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.065081 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.367271 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.554411 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-n5h5w"]
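The machine-config-daemon liveness failure above is a plain HTTP GET whose TCP connect was refused; kubelet-style HTTP probes treat any status from 200 through 399 as success and transport errors as failure. A minimal sketch:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// httpProbe performs one HTTP GET probe and reports success the way the
// prober lines above do: transport errors and 4xx/5xx both count as failure.
func httpProbe(url string, timeout time.Duration) (ok bool, detail string) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return false, err.Error() // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
}

func main() {
	ok, detail := httpProbe("http://127.0.0.1:8798/health", time.Second)
	fmt.Printf("probeResult=%v output=%q\n", ok, detail)
}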
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.569484 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.570272 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.583939 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n5h5w"] Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.634684 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.653570 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.717622 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-config-data\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.717821 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nr65\" (UniqueName: \"kubernetes.io/projected/b6375d75-bb3d-4f1e-a5d7-3474f937d241-kube-api-access-8nr65\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.717882 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.717918 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-scripts\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.819520 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.819837 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-scripts\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.819875 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-config-data\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.819978 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nr65\" (UniqueName: \"kubernetes.io/projected/b6375d75-bb3d-4f1e-a5d7-3474f937d241-kube-api-access-8nr65\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.828120 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-config-data\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.828271 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.838031 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-scripts\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.855006 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nr65\" (UniqueName: \"kubernetes.io/projected/b6375d75-bb3d-4f1e-a5d7-3474f937d241-kube-api-access-8nr65\") pod \"nova-cell1-cell-mapping-n5h5w\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:49 crc kubenswrapper[4746]: I0129 16:58:49.893434 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.040288 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.131787 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-combined-ca-bundle\") pod \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.131982 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-logs\") pod \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.132016 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-config-data\") pod \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.132045 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttd2g\" (UniqueName: \"kubernetes.io/projected/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-kube-api-access-ttd2g\") pod \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\" (UID: \"3d0a46b8-9bd8-49e3-ba06-0cca834c2009\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.132795 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-logs" (OuterVolumeSpecName: "logs") pod "3d0a46b8-9bd8-49e3-ba06-0cca834c2009" (UID: "3d0a46b8-9bd8-49e3-ba06-0cca834c2009"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.138368 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-kube-api-access-ttd2g" (OuterVolumeSpecName: "kube-api-access-ttd2g") pod "3d0a46b8-9bd8-49e3-ba06-0cca834c2009" (UID: "3d0a46b8-9bd8-49e3-ba06-0cca834c2009"). InnerVolumeSpecName "kube-api-access-ttd2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.176350 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d0a46b8-9bd8-49e3-ba06-0cca834c2009" (UID: "3d0a46b8-9bd8-49e3-ba06-0cca834c2009"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.194342 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-config-data" (OuterVolumeSpecName: "config-data") pod "3d0a46b8-9bd8-49e3-ba06-0cca834c2009" (UID: "3d0a46b8-9bd8-49e3-ba06-0cca834c2009"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.234000 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.234033 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.234042 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.234050 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttd2g\" (UniqueName: \"kubernetes.io/projected/3d0a46b8-9bd8-49e3-ba06-0cca834c2009-kube-api-access-ttd2g\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.373472 4746 generic.go:334] "Generic (PLEG): container finished" podID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerID="37712c908cdce3ed92511d70ae0a0f5e7f59442b0bc0766b62b8445512ee3bf0" exitCode=0 Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.373574 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerDied","Data":"37712c908cdce3ed92511d70ae0a0f5e7f59442b0bc0766b62b8445512ee3bf0"} Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.375908 4746 generic.go:334] "Generic (PLEG): container finished" podID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerID="21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36" exitCode=0 Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.375985 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.376030 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d0a46b8-9bd8-49e3-ba06-0cca834c2009","Type":"ContainerDied","Data":"21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36"} Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.376061 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3d0a46b8-9bd8-49e3-ba06-0cca834c2009","Type":"ContainerDied","Data":"094122458c275415eb8ff6ff963ad0cbb6c0565a307b7ce39cf76ae885007f1b"} Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.376081 4746 scope.go:117] "RemoveContainer" containerID="21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.382846 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n5h5w"] Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.420309 4746 scope.go:117] "RemoveContainer" containerID="472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.444505 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.493576 4746 scope.go:117] "RemoveContainer" containerID="21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.494638 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.495276 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.495392 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:50 crc kubenswrapper[4746]: E0129 16:58:50.495495 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36\": container with ID starting with 21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36 not found: ID does not exist" containerID="21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.495535 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36"} err="failed to get container status \"21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36\": rpc error: code = NotFound desc = could not find container \"21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36\": container with ID starting with 21293e792eb4b65d31f8e592f892cf968942677e6c03a9d2700f29f027372f36 not found: ID does not exist" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.495565 4746 scope.go:117] "RemoveContainer" containerID="472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f" Jan 29 16:58:50 crc kubenswrapper[4746]: E0129 16:58:50.496090 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-central-agent" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496111 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-central-agent" Jan 29 16:58:50 crc kubenswrapper[4746]: E0129 16:58:50.496131 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-api" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496139 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-api" Jan 29 16:58:50 crc kubenswrapper[4746]: E0129 16:58:50.496155 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-notification-agent" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496163 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-notification-agent" Jan 29 16:58:50 crc kubenswrapper[4746]: E0129 16:58:50.496239 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="sg-core" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496249 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="sg-core" Jan 29 16:58:50 crc kubenswrapper[4746]: E0129 16:58:50.496261 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-log" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496268 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-log" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496792 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-api" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496843 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" containerName="nova-api-log" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496861 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="sg-core" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496885 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-central-agent" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.496930 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" containerName="ceilometer-notification-agent" Jan 29 16:58:50 crc kubenswrapper[4746]: E0129 16:58:50.498064 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f\": container with ID starting with 472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f not found: ID does not exist" containerID="472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.498102 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f"} err="failed to get container status \"472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f\": rpc error: code = NotFound desc = could not find 
container \"472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f\": container with ID starting with 472c4c74a3f55667f1edc7925394f9cc380b79ef392b48583108d900ae26831f not found: ID does not exist" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.498437 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.498535 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.504335 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.504517 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.504674 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.650478 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-ceilometer-tls-certs\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.650804 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-log-httpd\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.650839 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-config-data\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.650876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-combined-ca-bundle\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.650909 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-run-httpd\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.650964 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-sg-core-conf-yaml\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651013 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-scripts\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651085 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln7vk\" (UniqueName: \"kubernetes.io/projected/b98227cf-8738-4cda-be6b-0bab9d1dedbc-kube-api-access-ln7vk\") pod \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\" (UID: \"b98227cf-8738-4cda-be6b-0bab9d1dedbc\") " Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651447 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651509 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d9f4359-140b-47f6-9972-f46051f1ef66-logs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-config-data\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651623 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651740 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9js6\" (UniqueName: \"kubernetes.io/projected/6d9f4359-140b-47f6-9972-f46051f1ef66-kube-api-access-b9js6\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.651794 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-public-tls-certs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.652854 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.653433 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.658643 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98227cf-8738-4cda-be6b-0bab9d1dedbc-kube-api-access-ln7vk" (OuterVolumeSpecName: "kube-api-access-ln7vk") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "kube-api-access-ln7vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.664473 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-scripts" (OuterVolumeSpecName: "scripts") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.665116 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.682418 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.682484 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.706519 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.732378 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.737574 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-config-data" (OuterVolumeSpecName: "config-data") pod "b98227cf-8738-4cda-be6b-0bab9d1dedbc" (UID: "b98227cf-8738-4cda-be6b-0bab9d1dedbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754427 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-config-data\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754484 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754588 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9js6\" (UniqueName: \"kubernetes.io/projected/6d9f4359-140b-47f6-9972-f46051f1ef66-kube-api-access-b9js6\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754622 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-public-tls-certs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754716 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754752 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d9f4359-140b-47f6-9972-f46051f1ef66-logs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754812 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754828 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754839 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754851 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b98227cf-8738-4cda-be6b-0bab9d1dedbc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754863 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754873 4746 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754884 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln7vk\" (UniqueName: \"kubernetes.io/projected/b98227cf-8738-4cda-be6b-0bab9d1dedbc-kube-api-access-ln7vk\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.754899 4746 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b98227cf-8738-4cda-be6b-0bab9d1dedbc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.755299 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d9f4359-140b-47f6-9972-f46051f1ef66-logs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.758581 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.759221 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.759615 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-public-tls-certs\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.761494 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-config-data\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.776110 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9js6\" (UniqueName: \"kubernetes.io/projected/6d9f4359-140b-47f6-9972-f46051f1ef66-kube-api-access-b9js6\") pod \"nova-api-0\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " pod="openstack/nova-api-0" Jan 29 16:58:50 crc kubenswrapper[4746]: I0129 16:58:50.943658 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.390040 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b98227cf-8738-4cda-be6b-0bab9d1dedbc","Type":"ContainerDied","Data":"9ea47350eadf8128588e2c744e6b4780ec2a1faea1d2b905cbc244a453a7c871"} Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.390366 4746 scope.go:117] "RemoveContainer" containerID="cfedcf5768911c36f98ea99964b98f495b189dc0673f213f07b4ba88580569d1" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.390070 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.397234 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n5h5w" event={"ID":"b6375d75-bb3d-4f1e-a5d7-3474f937d241","Type":"ContainerStarted","Data":"1198023c41c80e6dc1d51dd8e6370ba603f52d4acbb49ed3121e75fbb0054834"} Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.397274 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n5h5w" event={"ID":"b6375d75-bb3d-4f1e-a5d7-3474f937d241","Type":"ContainerStarted","Data":"8c450fb1160e38822d73eddbc63c7493c286faadafef08d9164c5c1953779350"} Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.424139 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.430305 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-n5h5w" podStartSLOduration=2.43028205 podStartE2EDuration="2.43028205s" podCreationTimestamp="2026-01-29 16:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:51.413851569 +0000 UTC m=+1453.814436213" watchObservedRunningTime="2026-01-29 16:58:51.43028205 +0000 UTC m=+1453.830866694" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.434677 4746 scope.go:117] "RemoveContainer" containerID="37712c908cdce3ed92511d70ae0a0f5e7f59442b0bc0766b62b8445512ee3bf0" Jan 29 16:58:51 crc kubenswrapper[4746]: W0129 16:58:51.435648 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d9f4359_140b_47f6_9972_f46051f1ef66.slice/crio-ea03ae7dc6850ae07bca10edc5fef695a8e5f5959fa7f8a539c55b353bd670ce WatchSource:0}: Error finding container ea03ae7dc6850ae07bca10edc5fef695a8e5f5959fa7f8a539c55b353bd670ce: Status 404 returned error can't find the container with id ea03ae7dc6850ae07bca10edc5fef695a8e5f5959fa7f8a539c55b353bd670ce Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.481832 4746 scope.go:117] "RemoveContainer" containerID="0e3695259a1a09ad3053780839e119f951dfb581abce18c9b11fbffe2eb201f5" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.488055 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.522601 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.542731 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.546807 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.552011 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.552686 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.552902 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.553038 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607024 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607145 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwl7\" (UniqueName: \"kubernetes.io/projected/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-kube-api-access-9lwl7\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607258 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607343 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607379 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-log-httpd\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607422 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-scripts\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607487 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-run-httpd\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.607549 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-config-data\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709351 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-config-data\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709439 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709487 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwl7\" (UniqueName: \"kubernetes.io/projected/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-kube-api-access-9lwl7\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709542 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709588 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709608 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-log-httpd\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709637 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-scripts\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.709657 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-run-httpd\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.710047 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-run-httpd\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.710149 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-log-httpd\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.720646 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-scripts\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.724660 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.724809 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-config-data\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.732124 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.735925 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwl7\" (UniqueName: \"kubernetes.io/projected/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-kube-api-access-9lwl7\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.738441 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " pod="openstack/ceilometer-0" Jan 29 16:58:51 crc kubenswrapper[4746]: E0129 16:58:51.810681 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca384131_3efa_43c4_b89c_006e62e467d0.slice/crio-1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0a053c_6e7f_4c08_84ed_f1c908d76718.slice/crio-4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a\": RecentStats: unable to find data in memory cache]" Jan 29 16:58:51 crc kubenswrapper[4746]: I0129 16:58:51.943854 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:52 crc kubenswrapper[4746]: I0129 16:58:52.414543 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d9f4359-140b-47f6-9972-f46051f1ef66","Type":"ContainerStarted","Data":"251256bf270ffd973c96aa7a15e03c83a8ff7f08d8957b70bc50e486b3704b33"} Jan 29 16:58:52 crc kubenswrapper[4746]: I0129 16:58:52.414613 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d9f4359-140b-47f6-9972-f46051f1ef66","Type":"ContainerStarted","Data":"94ce968b9d7a00cb46c241ba8c17fcee0f7c83319fcee1b1ffe7340d86ae9625"} Jan 29 16:58:52 crc kubenswrapper[4746]: I0129 16:58:52.414635 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d9f4359-140b-47f6-9972-f46051f1ef66","Type":"ContainerStarted","Data":"ea03ae7dc6850ae07bca10edc5fef695a8e5f5959fa7f8a539c55b353bd670ce"} Jan 29 16:58:52 crc kubenswrapper[4746]: I0129 16:58:52.427997 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:52 crc kubenswrapper[4746]: W0129 16:58:52.430293 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8aad3209_fb2f_42b9_8fc3_6c3bf4ac0a90.slice/crio-cca101ed6256bd0aa70fc88711464664181324285715d949907a3f96a1808385 WatchSource:0}: Error finding container cca101ed6256bd0aa70fc88711464664181324285715d949907a3f96a1808385: Status 404 returned error can't find the container with id cca101ed6256bd0aa70fc88711464664181324285715d949907a3f96a1808385 Jan 29 16:58:52 crc kubenswrapper[4746]: E0129 16:58:52.448919 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-42gjl" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" Jan 29 16:58:52 crc kubenswrapper[4746]: I0129 16:58:52.466454 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0a46b8-9bd8-49e3-ba06-0cca834c2009" path="/var/lib/kubelet/pods/3d0a46b8-9bd8-49e3-ba06-0cca834c2009/volumes" Jan 29 16:58:52 crc kubenswrapper[4746]: I0129 16:58:52.467527 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98227cf-8738-4cda-be6b-0bab9d1dedbc" path="/var/lib/kubelet/pods/b98227cf-8738-4cda-be6b-0bab9d1dedbc/volumes" Jan 29 16:58:52 crc kubenswrapper[4746]: I0129 16:58:52.491857 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.491832587 podStartE2EDuration="2.491832587s" podCreationTimestamp="2026-01-29 16:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:52.438792544 +0000 UTC m=+1454.839377198" watchObservedRunningTime="2026-01-29 16:58:52.491832587 +0000 UTC m=+1454.892417231" Jan 29 16:58:53 crc kubenswrapper[4746]: I0129 16:58:53.425672 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerStarted","Data":"b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82"} Jan 29 16:58:53 crc kubenswrapper[4746]: I0129 16:58:53.426353 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerStarted","Data":"cca101ed6256bd0aa70fc88711464664181324285715d949907a3f96a1808385"} Jan 29 16:58:53 crc kubenswrapper[4746]: I0129 16:58:53.812133 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:58:53 crc kubenswrapper[4746]: I0129 16:58:53.918023 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8f7h7"] Jan 29 16:58:53 crc kubenswrapper[4746]: I0129 16:58:53.920386 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerName="dnsmasq-dns" containerID="cri-o://a6ca57c1b1427d4152c2d3d29d17abec1ff2930930f94171eb6a4832a28e0ff4" gracePeriod=10 Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.442940 4746 generic.go:334] "Generic (PLEG): container finished" podID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerID="a6ca57c1b1427d4152c2d3d29d17abec1ff2930930f94171eb6a4832a28e0ff4" exitCode=0 Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.443474 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" event={"ID":"ced428a8-c8f4-4de1-89b7-965b4360f35d","Type":"ContainerDied","Data":"a6ca57c1b1427d4152c2d3d29d17abec1ff2930930f94171eb6a4832a28e0ff4"} Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.443573 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" event={"ID":"ced428a8-c8f4-4de1-89b7-965b4360f35d","Type":"ContainerDied","Data":"ba7ee0f54d4d62d1a02cbf0750046c154f58a8c8fa91b2ffc4635578e58938e9"} Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.443628 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba7ee0f54d4d62d1a02cbf0750046c154f58a8c8fa91b2ffc4635578e58938e9" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.457425 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerStarted","Data":"304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de"} Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.477388 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.567468 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n4kd\" (UniqueName: \"kubernetes.io/projected/ced428a8-c8f4-4de1-89b7-965b4360f35d-kube-api-access-6n4kd\") pod \"ced428a8-c8f4-4de1-89b7-965b4360f35d\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.567514 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-swift-storage-0\") pod \"ced428a8-c8f4-4de1-89b7-965b4360f35d\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.567566 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-nb\") pod \"ced428a8-c8f4-4de1-89b7-965b4360f35d\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.567592 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-config\") pod \"ced428a8-c8f4-4de1-89b7-965b4360f35d\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.567660 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-svc\") pod \"ced428a8-c8f4-4de1-89b7-965b4360f35d\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.567731 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-sb\") pod \"ced428a8-c8f4-4de1-89b7-965b4360f35d\" (UID: \"ced428a8-c8f4-4de1-89b7-965b4360f35d\") " Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.573588 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced428a8-c8f4-4de1-89b7-965b4360f35d-kube-api-access-6n4kd" (OuterVolumeSpecName: "kube-api-access-6n4kd") pod "ced428a8-c8f4-4de1-89b7-965b4360f35d" (UID: "ced428a8-c8f4-4de1-89b7-965b4360f35d"). InnerVolumeSpecName "kube-api-access-6n4kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.620133 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-config" (OuterVolumeSpecName: "config") pod "ced428a8-c8f4-4de1-89b7-965b4360f35d" (UID: "ced428a8-c8f4-4de1-89b7-965b4360f35d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.621998 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ced428a8-c8f4-4de1-89b7-965b4360f35d" (UID: "ced428a8-c8f4-4de1-89b7-965b4360f35d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.627700 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ced428a8-c8f4-4de1-89b7-965b4360f35d" (UID: "ced428a8-c8f4-4de1-89b7-965b4360f35d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.650210 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ced428a8-c8f4-4de1-89b7-965b4360f35d" (UID: "ced428a8-c8f4-4de1-89b7-965b4360f35d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.651544 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ced428a8-c8f4-4de1-89b7-965b4360f35d" (UID: "ced428a8-c8f4-4de1-89b7-965b4360f35d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.669751 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.669785 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.669802 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n4kd\" (UniqueName: \"kubernetes.io/projected/ced428a8-c8f4-4de1-89b7-965b4360f35d-kube-api-access-6n4kd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.669814 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.669841 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:54.669854 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced428a8-c8f4-4de1-89b7-965b4360f35d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:55.460948 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:55.495696 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8f7h7"] Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:55.504524 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8f7h7"] Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:56.458591 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" path="/var/lib/kubelet/pods/ced428a8-c8f4-4de1-89b7-965b4360f35d/volumes" Jan 29 16:58:56 crc kubenswrapper[4746]: I0129 16:58:56.473094 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerStarted","Data":"4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb"} Jan 29 16:58:57 crc kubenswrapper[4746]: I0129 16:58:57.486926 4746 generic.go:334] "Generic (PLEG): container finished" podID="b6375d75-bb3d-4f1e-a5d7-3474f937d241" containerID="1198023c41c80e6dc1d51dd8e6370ba603f52d4acbb49ed3121e75fbb0054834" exitCode=0 Jan 29 16:58:57 crc kubenswrapper[4746]: I0129 16:58:57.487091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n5h5w" event={"ID":"b6375d75-bb3d-4f1e-a5d7-3474f937d241","Type":"ContainerDied","Data":"1198023c41c80e6dc1d51dd8e6370ba603f52d4acbb49ed3121e75fbb0054834"} Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.834342 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.950748 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nr65\" (UniqueName: \"kubernetes.io/projected/b6375d75-bb3d-4f1e-a5d7-3474f937d241-kube-api-access-8nr65\") pod \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.950960 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-config-data\") pod \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.951054 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-scripts\") pod \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.951078 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-combined-ca-bundle\") pod \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\" (UID: \"b6375d75-bb3d-4f1e-a5d7-3474f937d241\") " Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.956695 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-scripts" (OuterVolumeSpecName: "scripts") pod "b6375d75-bb3d-4f1e-a5d7-3474f937d241" (UID: "b6375d75-bb3d-4f1e-a5d7-3474f937d241"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.956967 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6375d75-bb3d-4f1e-a5d7-3474f937d241-kube-api-access-8nr65" (OuterVolumeSpecName: "kube-api-access-8nr65") pod "b6375d75-bb3d-4f1e-a5d7-3474f937d241" (UID: "b6375d75-bb3d-4f1e-a5d7-3474f937d241"). InnerVolumeSpecName "kube-api-access-8nr65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.978638 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6375d75-bb3d-4f1e-a5d7-3474f937d241" (UID: "b6375d75-bb3d-4f1e-a5d7-3474f937d241"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:58 crc kubenswrapper[4746]: I0129 16:58:58.980135 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-config-data" (OuterVolumeSpecName: "config-data") pod "b6375d75-bb3d-4f1e-a5d7-3474f937d241" (UID: "b6375d75-bb3d-4f1e-a5d7-3474f937d241"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.054525 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.054571 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.054582 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nr65\" (UniqueName: \"kubernetes.io/projected/b6375d75-bb3d-4f1e-a5d7-3474f937d241-kube-api-access-8nr65\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.054591 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6375d75-bb3d-4f1e-a5d7-3474f937d241-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.343487 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-557bbc7df7-8f7h7" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.194:5353: i/o timeout" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.503292 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n5h5w" event={"ID":"b6375d75-bb3d-4f1e-a5d7-3474f937d241","Type":"ContainerDied","Data":"8c450fb1160e38822d73eddbc63c7493c286faadafef08d9164c5c1953779350"} Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.503338 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c450fb1160e38822d73eddbc63c7493c286faadafef08d9164c5c1953779350" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.503396 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n5h5w" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.668455 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.717555 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.718314 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.750724 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.750999 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-log" containerID="cri-o://94ce968b9d7a00cb46c241ba8c17fcee0f7c83319fcee1b1ffe7340d86ae9625" gracePeriod=30 Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.751147 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-api" containerID="cri-o://251256bf270ffd973c96aa7a15e03c83a8ff7f08d8957b70bc50e486b3704b33" gracePeriod=30 Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.790817 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.791096 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerName="nova-scheduler-scheduler" containerID="cri-o://03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" gracePeriod=30 Jan 29 16:58:59 crc kubenswrapper[4746]: I0129 16:58:59.832829 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:00 crc kubenswrapper[4746]: E0129 16:59:00.366518 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:00 crc kubenswrapper[4746]: E0129 16:59:00.374539 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:00 crc kubenswrapper[4746]: E0129 16:59:00.376695 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:00 crc kubenswrapper[4746]: E0129 16:59:00.376769 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerName="nova-scheduler-scheduler" Jan 29 16:59:00 crc kubenswrapper[4746]: I0129 16:59:00.512094 4746 generic.go:334] "Generic (PLEG): container finished" podID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerID="251256bf270ffd973c96aa7a15e03c83a8ff7f08d8957b70bc50e486b3704b33" exitCode=0 Jan 29 16:59:00 crc kubenswrapper[4746]: I0129 16:59:00.512133 4746 generic.go:334] "Generic (PLEG): container finished" podID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerID="94ce968b9d7a00cb46c241ba8c17fcee0f7c83319fcee1b1ffe7340d86ae9625" exitCode=143 Jan 29 16:59:00 crc kubenswrapper[4746]: I0129 16:59:00.512266 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d9f4359-140b-47f6-9972-f46051f1ef66","Type":"ContainerDied","Data":"251256bf270ffd973c96aa7a15e03c83a8ff7f08d8957b70bc50e486b3704b33"} Jan 29 16:59:00 crc kubenswrapper[4746]: I0129 16:59:00.512301 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d9f4359-140b-47f6-9972-f46051f1ef66","Type":"ContainerDied","Data":"94ce968b9d7a00cb46c241ba8c17fcee0f7c83319fcee1b1ffe7340d86ae9625"} Jan 29 16:59:00 crc kubenswrapper[4746]: I0129 16:59:00.518400 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.092259 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.199649 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d9f4359-140b-47f6-9972-f46051f1ef66-logs\") pod \"6d9f4359-140b-47f6-9972-f46051f1ef66\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.200040 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d9f4359-140b-47f6-9972-f46051f1ef66-logs" (OuterVolumeSpecName: "logs") pod "6d9f4359-140b-47f6-9972-f46051f1ef66" (UID: "6d9f4359-140b-47f6-9972-f46051f1ef66"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.200076 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-public-tls-certs\") pod \"6d9f4359-140b-47f6-9972-f46051f1ef66\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.200114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-internal-tls-certs\") pod \"6d9f4359-140b-47f6-9972-f46051f1ef66\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.200160 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-config-data\") pod \"6d9f4359-140b-47f6-9972-f46051f1ef66\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.200222 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9js6\" (UniqueName: \"kubernetes.io/projected/6d9f4359-140b-47f6-9972-f46051f1ef66-kube-api-access-b9js6\") pod \"6d9f4359-140b-47f6-9972-f46051f1ef66\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.200394 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-combined-ca-bundle\") pod \"6d9f4359-140b-47f6-9972-f46051f1ef66\" (UID: \"6d9f4359-140b-47f6-9972-f46051f1ef66\") " Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.200994 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d9f4359-140b-47f6-9972-f46051f1ef66-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.208513 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d9f4359-140b-47f6-9972-f46051f1ef66-kube-api-access-b9js6" (OuterVolumeSpecName: "kube-api-access-b9js6") pod "6d9f4359-140b-47f6-9972-f46051f1ef66" (UID: "6d9f4359-140b-47f6-9972-f46051f1ef66"). InnerVolumeSpecName "kube-api-access-b9js6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.231689 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-config-data" (OuterVolumeSpecName: "config-data") pod "6d9f4359-140b-47f6-9972-f46051f1ef66" (UID: "6d9f4359-140b-47f6-9972-f46051f1ef66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.233697 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d9f4359-140b-47f6-9972-f46051f1ef66" (UID: "6d9f4359-140b-47f6-9972-f46051f1ef66"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.259975 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6d9f4359-140b-47f6-9972-f46051f1ef66" (UID: "6d9f4359-140b-47f6-9972-f46051f1ef66"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.277788 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6d9f4359-140b-47f6-9972-f46051f1ef66" (UID: "6d9f4359-140b-47f6-9972-f46051f1ef66"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.302636 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.302675 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.302687 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.302700 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d9f4359-140b-47f6-9972-f46051f1ef66-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.302712 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9js6\" (UniqueName: \"kubernetes.io/projected/6d9f4359-140b-47f6-9972-f46051f1ef66-kube-api-access-b9js6\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.523314 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-log" containerID="cri-o://5d105cc2dc5db03d9d4314d9618988ab0d96e28e8c2b9f889998c93baf2e7b67" gracePeriod=30 Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.523684 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.523829 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d9f4359-140b-47f6-9972-f46051f1ef66","Type":"ContainerDied","Data":"ea03ae7dc6850ae07bca10edc5fef695a8e5f5959fa7f8a539c55b353bd670ce"} Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.523882 4746 scope.go:117] "RemoveContainer" containerID="251256bf270ffd973c96aa7a15e03c83a8ff7f08d8957b70bc50e486b3704b33" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.524085 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-metadata" containerID="cri-o://82848e4e7ae0bac815a1a9d99a0cd0a4cc3305a64a1316f9cca739ff4dd7764c" gracePeriod=30 Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.544828 4746 scope.go:117] "RemoveContainer" containerID="94ce968b9d7a00cb46c241ba8c17fcee0f7c83319fcee1b1ffe7340d86ae9625" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.572446 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.584325 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.599707 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:01 crc kubenswrapper[4746]: E0129 16:59:01.600150 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerName="dnsmasq-dns" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600175 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerName="dnsmasq-dns" Jan 29 16:59:01 crc kubenswrapper[4746]: E0129 16:59:01.600209 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerName="init" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600219 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerName="init" Jan 29 16:59:01 crc kubenswrapper[4746]: E0129 16:59:01.600250 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6375d75-bb3d-4f1e-a5d7-3474f937d241" containerName="nova-manage" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600259 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6375d75-bb3d-4f1e-a5d7-3474f937d241" containerName="nova-manage" Jan 29 16:59:01 crc kubenswrapper[4746]: E0129 16:59:01.600284 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-log" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600293 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-log" Jan 29 16:59:01 crc kubenswrapper[4746]: E0129 16:59:01.600304 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-api" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600312 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-api" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600521 4746 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="ced428a8-c8f4-4de1-89b7-965b4360f35d" containerName="dnsmasq-dns" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600556 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-api" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600570 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6375d75-bb3d-4f1e-a5d7-3474f937d241" containerName="nova-manage" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.600586 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" containerName="nova-api-log" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.601728 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.603952 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.604331 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.618612 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.623915 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.713261 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbdab6-8b6e-4199-808c-be07e373df64-logs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.713541 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.713629 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrb8\" (UniqueName: \"kubernetes.io/projected/4cfbdab6-8b6e-4199-808c-be07e373df64-kube-api-access-zqrb8\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.713712 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-config-data\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.713798 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.713961 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-public-tls-certs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.815577 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.815840 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-public-tls-certs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.815951 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbdab6-8b6e-4199-808c-be07e373df64-logs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.816032 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.816127 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqrb8\" (UniqueName: \"kubernetes.io/projected/4cfbdab6-8b6e-4199-808c-be07e373df64-kube-api-access-zqrb8\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.816263 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-config-data\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.817404 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbdab6-8b6e-4199-808c-be07e373df64-logs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.824912 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-public-tls-certs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.824969 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.825152 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-config-data\") pod 
\"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.825240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.836970 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqrb8\" (UniqueName: \"kubernetes.io/projected/4cfbdab6-8b6e-4199-808c-be07e373df64-kube-api-access-zqrb8\") pod \"nova-api-0\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " pod="openstack/nova-api-0" Jan 29 16:59:01 crc kubenswrapper[4746]: I0129 16:59:01.935066 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:59:02 crc kubenswrapper[4746]: E0129 16:59:02.048487 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca384131_3efa_43c4_b89c_006e62e467d0.slice/crio-1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0a053c_6e7f_4c08_84ed_f1c908d76718.slice/crio-4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a\": RecentStats: unable to find data in memory cache]" Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.364862 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:02 crc kubenswrapper[4746]: W0129 16:59:02.367946 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cfbdab6_8b6e_4199_808c_be07e373df64.slice/crio-b386eedd126608892d8c0b9d82de86afde17c10844e725e83a51e28b70fabec9 WatchSource:0}: Error finding container b386eedd126608892d8c0b9d82de86afde17c10844e725e83a51e28b70fabec9: Status 404 returned error can't find the container with id b386eedd126608892d8c0b9d82de86afde17c10844e725e83a51e28b70fabec9 Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.466647 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d9f4359-140b-47f6-9972-f46051f1ef66" path="/var/lib/kubelet/pods/6d9f4359-140b-47f6-9972-f46051f1ef66/volumes" Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.537925 4746 generic.go:334] "Generic (PLEG): container finished" podID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerID="5d105cc2dc5db03d9d4314d9618988ab0d96e28e8c2b9f889998c93baf2e7b67" exitCode=143 Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.538008 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5171220-fae1-41cf-9c83-6b02dec686bc","Type":"ContainerDied","Data":"5d105cc2dc5db03d9d4314d9618988ab0d96e28e8c2b9f889998c93baf2e7b67"} Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.541820 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerStarted","Data":"66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124"} Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.541956 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.546846 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cfbdab6-8b6e-4199-808c-be07e373df64","Type":"ContainerStarted","Data":"b386eedd126608892d8c0b9d82de86afde17c10844e725e83a51e28b70fabec9"} Jan 29 16:59:02 crc kubenswrapper[4746]: I0129 16:59:02.580693 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.755393099 podStartE2EDuration="11.580673001s" podCreationTimestamp="2026-01-29 16:58:51 +0000 UTC" firstStartedPulling="2026-01-29 16:58:52.432230763 +0000 UTC m=+1454.832815407" lastFinishedPulling="2026-01-29 16:59:01.257510665 +0000 UTC m=+1463.658095309" observedRunningTime="2026-01-29 16:59:02.575135819 +0000 UTC m=+1464.975720463" watchObservedRunningTime="2026-01-29 16:59:02.580673001 +0000 UTC m=+1464.981257645" Jan 29 16:59:03 crc kubenswrapper[4746]: I0129 16:59:03.559638 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cfbdab6-8b6e-4199-808c-be07e373df64","Type":"ContainerStarted","Data":"90d0f7c0ec8bee68f1032e1115bb3957e1cc29de95dedf8075f362d0b3ca5802"} Jan 29 16:59:03 crc kubenswrapper[4746]: I0129 16:59:03.559975 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cfbdab6-8b6e-4199-808c-be07e373df64","Type":"ContainerStarted","Data":"f5dbc0994f4e33f3d35d508e2ee9e277a69d60f776de81a42fb9ff89c6a2d705"} Jan 29 16:59:03 crc kubenswrapper[4746]: I0129 16:59:03.589072 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.589048811 podStartE2EDuration="2.589048811s" podCreationTimestamp="2026-01-29 16:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:59:03.582813811 +0000 UTC m=+1465.983398455" watchObservedRunningTime="2026-01-29 16:59:03.589048811 +0000 UTC m=+1465.989633455" Jan 29 16:59:04 crc kubenswrapper[4746]: I0129 16:59:04.647512 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:57212->10.217.0.204:8775: read: connection reset by peer" Jan 29 16:59:04 crc kubenswrapper[4746]: I0129 16:59:04.647583 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:57210->10.217.0.204:8775: read: connection reset by peer" Jan 29 16:59:05 crc kubenswrapper[4746]: E0129 16:59:05.365439 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:05 crc kubenswrapper[4746]: E0129 16:59:05.367092 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:05 crc kubenswrapper[4746]: E0129 16:59:05.368180 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:05 crc kubenswrapper[4746]: E0129 16:59:05.368249 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerName="nova-scheduler-scheduler" Jan 29 16:59:08 crc kubenswrapper[4746]: I0129 16:59:08.599979 4746 generic.go:334] "Generic (PLEG): container finished" podID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerID="82848e4e7ae0bac815a1a9d99a0cd0a4cc3305a64a1316f9cca739ff4dd7764c" exitCode=0 Jan 29 16:59:08 crc kubenswrapper[4746]: I0129 16:59:08.600051 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5171220-fae1-41cf-9c83-6b02dec686bc","Type":"ContainerDied","Data":"82848e4e7ae0bac815a1a9d99a0cd0a4cc3305a64a1316f9cca739ff4dd7764c"} Jan 29 16:59:09 crc kubenswrapper[4746]: I0129 16:59:09.633648 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": dial tcp 10.217.0.204:8775: connect: connection refused" Jan 29 16:59:09 crc kubenswrapper[4746]: I0129 16:59:09.634338 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": dial tcp 10.217.0.204:8775: connect: connection refused" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.284338 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: E0129 16:59:10.364150 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4 is running failed: container process not found" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:10 crc kubenswrapper[4746]: E0129 16:59:10.364772 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4 is running failed: container process not found" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:10 crc kubenswrapper[4746]: E0129 16:59:10.365283 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4 is running failed: container process not found" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:10 crc kubenswrapper[4746]: E0129 16:59:10.365320 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerName="nova-scheduler-scheduler" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.367965 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-config-data\") pod \"c5171220-fae1-41cf-9c83-6b02dec686bc\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.369017 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psvzd\" (UniqueName: \"kubernetes.io/projected/c5171220-fae1-41cf-9c83-6b02dec686bc-kube-api-access-psvzd\") pod \"c5171220-fae1-41cf-9c83-6b02dec686bc\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.369051 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-nova-metadata-tls-certs\") pod \"c5171220-fae1-41cf-9c83-6b02dec686bc\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.369083 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-combined-ca-bundle\") pod \"c5171220-fae1-41cf-9c83-6b02dec686bc\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.369151 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c5171220-fae1-41cf-9c83-6b02dec686bc-logs\") pod \"c5171220-fae1-41cf-9c83-6b02dec686bc\" (UID: \"c5171220-fae1-41cf-9c83-6b02dec686bc\") " Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.370338 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5171220-fae1-41cf-9c83-6b02dec686bc-logs" (OuterVolumeSpecName: "logs") pod "c5171220-fae1-41cf-9c83-6b02dec686bc" (UID: "c5171220-fae1-41cf-9c83-6b02dec686bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.390751 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5171220-fae1-41cf-9c83-6b02dec686bc-kube-api-access-psvzd" (OuterVolumeSpecName: "kube-api-access-psvzd") pod "c5171220-fae1-41cf-9c83-6b02dec686bc" (UID: "c5171220-fae1-41cf-9c83-6b02dec686bc"). InnerVolumeSpecName "kube-api-access-psvzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.397045 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-config-data" (OuterVolumeSpecName: "config-data") pod "c5171220-fae1-41cf-9c83-6b02dec686bc" (UID: "c5171220-fae1-41cf-9c83-6b02dec686bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.407637 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5171220-fae1-41cf-9c83-6b02dec686bc" (UID: "c5171220-fae1-41cf-9c83-6b02dec686bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.430507 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c5171220-fae1-41cf-9c83-6b02dec686bc" (UID: "c5171220-fae1-41cf-9c83-6b02dec686bc"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.471162 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.471242 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psvzd\" (UniqueName: \"kubernetes.io/projected/c5171220-fae1-41cf-9c83-6b02dec686bc-kube-api-access-psvzd\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.471254 4746 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.471264 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5171220-fae1-41cf-9c83-6b02dec686bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.471272 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5171220-fae1-41cf-9c83-6b02dec686bc-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.619341 4746 generic.go:334] "Generic (PLEG): container finished" podID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" exitCode=0 Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.619408 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"235c7742-ae7a-4603-b350-23ffb2c0e545","Type":"ContainerDied","Data":"03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4"} Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.621261 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42gjl" event={"ID":"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6","Type":"ContainerStarted","Data":"064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e"} Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.623800 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c5171220-fae1-41cf-9c83-6b02dec686bc","Type":"ContainerDied","Data":"ea50ab4d467b36b877d933f320f737f8f0cfffb86f538b90d86384b1e04c3d9e"} Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.623830 4746 scope.go:117] "RemoveContainer" containerID="82848e4e7ae0bac815a1a9d99a0cd0a4cc3305a64a1316f9cca739ff4dd7764c" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.623878 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.676478 4746 scope.go:117] "RemoveContainer" containerID="5d105cc2dc5db03d9d4314d9618988ab0d96e28e8c2b9f889998c93baf2e7b67" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.680557 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.708404 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.757248 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:10 crc kubenswrapper[4746]: E0129 16:59:10.757672 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-metadata" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.757689 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-metadata" Jan 29 16:59:10 crc kubenswrapper[4746]: E0129 16:59:10.757703 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-log" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.757711 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-log" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.757931 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-log" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.757952 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" containerName="nova-metadata-metadata" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.758862 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.763531 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.763685 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.768632 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.878479 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-config-data\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.878866 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.879083 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9gw\" (UniqueName: \"kubernetes.io/projected/abd6dc02-1269-43b8-a1aa-d239875e4902-kube-api-access-gw9gw\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.879168 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd6dc02-1269-43b8-a1aa-d239875e4902-logs\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.879215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.980421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw9gw\" (UniqueName: \"kubernetes.io/projected/abd6dc02-1269-43b8-a1aa-d239875e4902-kube-api-access-gw9gw\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.980474 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd6dc02-1269-43b8-a1aa-d239875e4902-logs\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.980491 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " 
pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.980568 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-config-data\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.980596 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.980941 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd6dc02-1269-43b8-a1aa-d239875e4902-logs\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.986702 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.987157 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-config-data\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:10 crc kubenswrapper[4746]: I0129 16:59:10.991289 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:10.999973 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw9gw\" (UniqueName: \"kubernetes.io/projected/abd6dc02-1269-43b8-a1aa-d239875e4902-kube-api-access-gw9gw\") pod \"nova-metadata-0\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " pod="openstack/nova-metadata-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.053333 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.085413 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.183178 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9frx\" (UniqueName: \"kubernetes.io/projected/235c7742-ae7a-4603-b350-23ffb2c0e545-kube-api-access-l9frx\") pod \"235c7742-ae7a-4603-b350-23ffb2c0e545\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.183664 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-config-data\") pod \"235c7742-ae7a-4603-b350-23ffb2c0e545\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.183852 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-combined-ca-bundle\") pod \"235c7742-ae7a-4603-b350-23ffb2c0e545\" (UID: \"235c7742-ae7a-4603-b350-23ffb2c0e545\") " Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.190631 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235c7742-ae7a-4603-b350-23ffb2c0e545-kube-api-access-l9frx" (OuterVolumeSpecName: "kube-api-access-l9frx") pod "235c7742-ae7a-4603-b350-23ffb2c0e545" (UID: "235c7742-ae7a-4603-b350-23ffb2c0e545"). InnerVolumeSpecName "kube-api-access-l9frx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.218706 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "235c7742-ae7a-4603-b350-23ffb2c0e545" (UID: "235c7742-ae7a-4603-b350-23ffb2c0e545"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.227113 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-config-data" (OuterVolumeSpecName: "config-data") pod "235c7742-ae7a-4603-b350-23ffb2c0e545" (UID: "235c7742-ae7a-4603-b350-23ffb2c0e545"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.285431 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.285471 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9frx\" (UniqueName: \"kubernetes.io/projected/235c7742-ae7a-4603-b350-23ffb2c0e545-kube-api-access-l9frx\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.285482 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/235c7742-ae7a-4603-b350-23ffb2c0e545-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.535835 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.633333 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"235c7742-ae7a-4603-b350-23ffb2c0e545","Type":"ContainerDied","Data":"97f429cadfe3a5765e940da2b8f14413872dc5263f48360062dd7fe9dc21006e"} Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.633618 4746 scope.go:117] "RemoveContainer" containerID="03f178b12961b375b69f12cfa94800ec1f9fe71aa1cd922b2f2dba5e2bb377e4" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.633428 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.635041 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abd6dc02-1269-43b8-a1aa-d239875e4902","Type":"ContainerStarted","Data":"0d727a66ef596a29971b4cbef16439a48124e1c54cea78b233d81747f12d9aa1"} Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.640084 4746 generic.go:334] "Generic (PLEG): container finished" podID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerID="064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e" exitCode=0 Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.640137 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42gjl" event={"ID":"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6","Type":"ContainerDied","Data":"064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e"} Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.691588 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.702554 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.717111 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:11 crc kubenswrapper[4746]: E0129 16:59:11.717703 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerName="nova-scheduler-scheduler" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.717734 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerName="nova-scheduler-scheduler" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.717993 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" containerName="nova-scheduler-scheduler" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.719465 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.722659 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.733375 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.794398 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-config-data\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.794472 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.794493 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p44v\" (UniqueName: \"kubernetes.io/projected/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-kube-api-access-5p44v\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.896039 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-config-data\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.896136 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.896215 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p44v\" (UniqueName: \"kubernetes.io/projected/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-kube-api-access-5p44v\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.900235 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.900239 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-config-data\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " 
pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.915954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p44v\" (UniqueName: \"kubernetes.io/projected/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-kube-api-access-5p44v\") pod \"nova-scheduler-0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " pod="openstack/nova-scheduler-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.936031 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:59:11 crc kubenswrapper[4746]: I0129 16:59:11.936069 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.049749 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:59:12 crc kubenswrapper[4746]: E0129 16:59:12.303149 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca384131_3efa_43c4_b89c_006e62e467d0.slice/crio-1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0a053c_6e7f_4c08_84ed_f1c908d76718.slice/crio-4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a\": RecentStats: unable to find data in memory cache]" Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.466974 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235c7742-ae7a-4603-b350-23ffb2c0e545" path="/var/lib/kubelet/pods/235c7742-ae7a-4603-b350-23ffb2c0e545/volumes" Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.468174 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5171220-fae1-41cf-9c83-6b02dec686bc" path="/var/lib/kubelet/pods/c5171220-fae1-41cf-9c83-6b02dec686bc/volumes" Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.519869 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.663645 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abd6dc02-1269-43b8-a1aa-d239875e4902","Type":"ContainerStarted","Data":"34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5"} Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.664367 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abd6dc02-1269-43b8-a1aa-d239875e4902","Type":"ContainerStarted","Data":"9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55"} Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.671224 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42gjl" event={"ID":"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6","Type":"ContainerStarted","Data":"7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521"} Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.673426 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0","Type":"ContainerStarted","Data":"6dbc5f0d9de373c9e7d441dc013fc5d8642b85a90c3d3a29cc15b679dc6d9ffa"} Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.698779 4746 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.698754346 podStartE2EDuration="2.698754346s" podCreationTimestamp="2026-01-29 16:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:59:12.684754642 +0000 UTC m=+1475.085339296" watchObservedRunningTime="2026-01-29 16:59:12.698754346 +0000 UTC m=+1475.099339010" Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.713466 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-42gjl" podStartSLOduration=3.363176422 podStartE2EDuration="54.713442188s" podCreationTimestamp="2026-01-29 16:58:18 +0000 UTC" firstStartedPulling="2026-01-29 16:58:20.924105361 +0000 UTC m=+1423.324690005" lastFinishedPulling="2026-01-29 16:59:12.274371127 +0000 UTC m=+1474.674955771" observedRunningTime="2026-01-29 16:59:12.701704837 +0000 UTC m=+1475.102289481" watchObservedRunningTime="2026-01-29 16:59:12.713442188 +0000 UTC m=+1475.114026832" Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.951408 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:59:12 crc kubenswrapper[4746]: I0129 16:59:12.951649 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:59:13 crc kubenswrapper[4746]: I0129 16:59:13.684258 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0","Type":"ContainerStarted","Data":"905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e"} Jan 29 16:59:13 crc kubenswrapper[4746]: I0129 16:59:13.709566 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.709547063 podStartE2EDuration="2.709547063s" podCreationTimestamp="2026-01-29 16:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:59:13.703593389 +0000 UTC m=+1476.104178043" watchObservedRunningTime="2026-01-29 16:59:13.709547063 +0000 UTC m=+1476.110131717" Jan 29 16:59:16 crc kubenswrapper[4746]: I0129 16:59:16.086721 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:59:16 crc kubenswrapper[4746]: I0129 16:59:16.087058 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:59:17 crc kubenswrapper[4746]: I0129 16:59:17.050200 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:59:19 crc kubenswrapper[4746]: I0129 16:59:19.065404 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:59:19 crc kubenswrapper[4746]: I0129 
16:59:19.065761 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:59:19 crc kubenswrapper[4746]: I0129 16:59:19.324685 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:59:19 crc kubenswrapper[4746]: I0129 16:59:19.324739 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:59:19 crc kubenswrapper[4746]: I0129 16:59:19.370389 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:59:19 crc kubenswrapper[4746]: I0129 16:59:19.779120 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:59:20 crc kubenswrapper[4746]: I0129 16:59:20.175878 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42gjl"] Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.086919 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.087288 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.751747 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-42gjl" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="registry-server" containerID="cri-o://7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521" gracePeriod=2 Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.945371 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.945894 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.962489 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.962796 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:59:21 crc kubenswrapper[4746]: I0129 16:59:21.963047 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.050236 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.082976 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.105391 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 
16:59:22.105421 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.212918 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.297821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfm5r\" (UniqueName: \"kubernetes.io/projected/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-kube-api-access-xfm5r\") pod \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.297985 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-catalog-content\") pod \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.298090 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-utilities\") pod \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\" (UID: \"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6\") " Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.299367 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-utilities" (OuterVolumeSpecName: "utilities") pod "c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" (UID: "c3b9ac52-08a9-4d7d-a46c-285ed708fcc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.304362 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-kube-api-access-xfm5r" (OuterVolumeSpecName: "kube-api-access-xfm5r") pod "c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" (UID: "c3b9ac52-08a9-4d7d-a46c-285ed708fcc6"). InnerVolumeSpecName "kube-api-access-xfm5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.399653 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.399682 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfm5r\" (UniqueName: \"kubernetes.io/projected/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-kube-api-access-xfm5r\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.427503 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" (UID: "c3b9ac52-08a9-4d7d-a46c-285ed708fcc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.501757 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:22 crc kubenswrapper[4746]: E0129 16:59:22.576574 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0a053c_6e7f_4c08_84ed_f1c908d76718.slice/crio-4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b9ac52_08a9_4d7d_a46c_285ed708fcc6.slice/crio-5cb35447bca7d21888170beba3773c94e26f91cd524fe515cbaa4b5e254d51bb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca384131_3efa_43c4_b89c_006e62e467d0.slice/crio-1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565\": RecentStats: unable to find data in memory cache]" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.761796 4746 generic.go:334] "Generic (PLEG): container finished" podID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerID="7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521" exitCode=0 Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.761870 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42gjl" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.761869 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42gjl" event={"ID":"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6","Type":"ContainerDied","Data":"7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521"} Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.762274 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42gjl" event={"ID":"c3b9ac52-08a9-4d7d-a46c-285ed708fcc6","Type":"ContainerDied","Data":"5cb35447bca7d21888170beba3773c94e26f91cd524fe515cbaa4b5e254d51bb"} Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.762301 4746 scope.go:117] "RemoveContainer" containerID="7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.763644 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.773404 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.792951 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42gjl"] Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.798170 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.802303 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-42gjl"] Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.815714 4746 scope.go:117] "RemoveContainer" containerID="064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.855395 4746 
scope.go:117] "RemoveContainer" containerID="2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.916366 4746 scope.go:117] "RemoveContainer" containerID="7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521" Jan 29 16:59:22 crc kubenswrapper[4746]: E0129 16:59:22.921340 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521\": container with ID starting with 7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521 not found: ID does not exist" containerID="7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.921398 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521"} err="failed to get container status \"7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521\": rpc error: code = NotFound desc = could not find container \"7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521\": container with ID starting with 7857725f74449d6ff1962ebca7db4121df59449ba5d6d5880fda1df534e93521 not found: ID does not exist" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.921430 4746 scope.go:117] "RemoveContainer" containerID="064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e" Jan 29 16:59:22 crc kubenswrapper[4746]: E0129 16:59:22.922778 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e\": container with ID starting with 064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e not found: ID does not exist" containerID="064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.922827 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e"} err="failed to get container status \"064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e\": rpc error: code = NotFound desc = could not find container \"064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e\": container with ID starting with 064a43decdd5ac8621886c562a7dcf80e1ac76120b01802fa035a5c3fcfdca5e not found: ID does not exist" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.922858 4746 scope.go:117] "RemoveContainer" containerID="2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443" Jan 29 16:59:22 crc kubenswrapper[4746]: E0129 16:59:22.928257 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443\": container with ID starting with 2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443 not found: ID does not exist" containerID="2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443" Jan 29 16:59:22 crc kubenswrapper[4746]: I0129 16:59:22.928294 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443"} err="failed to get container status 
\"2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443\": rpc error: code = NotFound desc = could not find container \"2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443\": container with ID starting with 2467e6c8fa064fc646598a61a3b48a1d8afe1756bfa313e21a2a75924f2ca443 not found: ID does not exist" Jan 29 16:59:24 crc kubenswrapper[4746]: I0129 16:59:24.454857 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" path="/var/lib/kubelet/pods/c3b9ac52-08a9-4d7d-a46c-285ed708fcc6/volumes" Jan 29 16:59:31 crc kubenswrapper[4746]: I0129 16:59:31.091169 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:59:31 crc kubenswrapper[4746]: I0129 16:59:31.095897 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:59:31 crc kubenswrapper[4746]: I0129 16:59:31.102872 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:59:31 crc kubenswrapper[4746]: I0129 16:59:31.849937 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:59:32 crc kubenswrapper[4746]: E0129 16:59:32.823927 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca384131_3efa_43c4_b89c_006e62e467d0.slice/crio-1c2e31baa1f2e44a46515ccaf5446e62ddb595192bbf333b49d129cbf502b565\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e0a053c_6e7f_4c08_84ed_f1c908d76718.slice/crio-4e92778626c4ff44872a0605ddf403f1b9e09783dfa61afd68125c97855e6e0a\": RecentStats: unable to find data in memory cache]" Jan 29 16:59:38 crc kubenswrapper[4746]: E0129 16:59:38.473947 4746 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a24b30031160214bc7f0727051afc811f73b41af7bc93af2092237cb10b05899/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a24b30031160214bc7f0727051afc811f73b41af7bc93af2092237cb10b05899/diff: no such file or directory, extraDiskErr: Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.065032 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.065541 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.065585 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.066299 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.066352 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" gracePeriod=600 Jan 29 16:59:49 crc kubenswrapper[4746]: E0129 16:59:49.196708 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.586800 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.589451 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="0185f119-b92f-4f05-9d0d-f0b2e081c331" containerName="openstackclient" containerID="cri-o://e4ad991870d64b906f98b966d5a79dd93b5367dac6510d8b9f9b6e56123d442e" gracePeriod=2 Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.603162 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.677372 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-m4gmj"] Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.695504 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-m4gmj"] Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.720086 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.794313 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7d9fr"] Jan 29 16:59:49 crc kubenswrapper[4746]: E0129 16:59:49.794853 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="registry-server" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.794870 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="registry-server" Jan 29 16:59:49 crc kubenswrapper[4746]: E0129 16:59:49.794888 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="extract-utilities" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.794895 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="extract-utilities" Jan 29 16:59:49 crc kubenswrapper[4746]: E0129 16:59:49.794910 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0185f119-b92f-4f05-9d0d-f0b2e081c331" containerName="openstackclient" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.794933 4746 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0185f119-b92f-4f05-9d0d-f0b2e081c331" containerName="openstackclient" Jan 29 16:59:49 crc kubenswrapper[4746]: E0129 16:59:49.794950 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="extract-content" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.794956 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="extract-content" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.795221 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0185f119-b92f-4f05-9d0d-f0b2e081c331" containerName="openstackclient" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.795237 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b9ac52-08a9-4d7d-a46c-285ed708fcc6" containerName="registry-server" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.796093 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.807476 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.816969 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts\") pod \"root-account-create-update-7d9fr\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:49 crc kubenswrapper[4746]: I0129 16:59:49.817030 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/a5213774-9475-450b-a26f-2212d807c39f-kube-api-access-nf629\") pod \"root-account-create-update-7d9fr\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.077102 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/a5213774-9475-450b-a26f-2212d807c39f-kube-api-access-nf629\") pod \"root-account-create-update-7d9fr\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.077608 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts\") pod \"root-account-create-update-7d9fr\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.078391 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts\") pod \"root-account-create-update-7d9fr\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.080824 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:50 crc 
kubenswrapper[4746]: E0129 16:59:50.080871 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data podName:6b6e0a39-5c0e-4632-bc24-dd8c7eb25788 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:50.58085595 +0000 UTC m=+1512.981440594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data") pod "rabbitmq-cell1-server-0" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.122589 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4017-account-create-update-x8q6t"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.125260 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" exitCode=0 Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.125335 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156"} Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.125388 4746 scope.go:117] "RemoveContainer" containerID="c1bf44a70454193334b73bbbaa8e59d7b095d5f8d7c6a3569af1049d7583b251" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.127288 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.128322 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.153866 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/a5213774-9475-450b-a26f-2212d807c39f-kube-api-access-nf629\") pod \"root-account-create-update-7d9fr\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.166622 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4017-account-create-update-x8q6t"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.228072 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7d9fr"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.257431 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ee86-account-create-update-wj5mf"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.288263 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ee86-account-create-update-wj5mf"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.312033 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-w69t8"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 
16:59:50.324649 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-w69t8"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.360972 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-628f-account-create-update-xfw92"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.378249 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-628f-account-create-update-xfw92"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.386063 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pplw4"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.398474 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rm6fv"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.398753 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-rm6fv" podUID="fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" containerName="openstack-network-exporter" containerID="cri-o://bd785516068c374d3a1fab30f0a344202849aa7b8520ee5b2eebfb62b9ebbc3e" gracePeriod=30 Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.410462 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-hlgxj"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.420394 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5505-account-create-update-mcpqs"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.426604 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5505-account-create-update-mcpqs"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.428789 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.437273 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.459911 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d60a101-b5c2-4280-8d06-c7556eaf1535" path="/var/lib/kubelet/pods/5d60a101-b5c2-4280-8d06-c7556eaf1535/volumes" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.460516 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66de13d9-6c00-4d6a-88a3-3fb2266d33aa" path="/var/lib/kubelet/pods/66de13d9-6c00-4d6a-88a3-3fb2266d33aa/volumes" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.461065 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d3b8c6-5997-4881-8b92-b5244c49fd1c" path="/var/lib/kubelet/pods/a3d3b8c6-5997-4881-8b92-b5244c49fd1c/volumes" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.461595 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7312900-d50f-4b7a-9b16-fb9487c1ad62" path="/var/lib/kubelet/pods/d7312900-d50f-4b7a-9b16-fb9487c1ad62/volumes" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.462765 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dadcfd7a-71a2-405b-8487-0dedc7cf9b6a" path="/var/lib/kubelet/pods/dadcfd7a-71a2-405b-8487-0dedc7cf9b6a/volumes" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.463330 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0634b6-22e2-4042-a738-45efb60d6c87" path="/var/lib/kubelet/pods/ed0634b6-22e2-4042-a738-45efb60d6c87/volumes" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.463885 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w422g"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.469699 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w422g"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.514271 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7108-account-create-update-2rrxt"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.530865 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7108-account-create-update-2rrxt"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.545735 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qpnkt"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.562036 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qpnkt"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.592684 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.592899 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="ovn-northd" containerID="cri-o://251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34" gracePeriod=30 Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.593279 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="openstack-network-exporter" containerID="cri-o://8d41c00ff4e878b0ca19eebfb37df14fb06c2ce7bba3e45e02c666faf55cdc88" 
gracePeriod=30 Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.604741 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.604999 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data podName:6b6e0a39-5c0e-4632-bc24-dd8c7eb25788 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:51.604978942 +0000 UTC m=+1514.005563606 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data") pod "rabbitmq-cell1-server-0" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.605033 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.605052 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data podName:71c96526-7c37-42c2-896e-b551dd6ed5b8 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:51.105046704 +0000 UTC m=+1513.505631338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data") pod "rabbitmq-server-0" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8") : configmap "rabbitmq-config-data" not found Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.643237 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-n5h5w"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.664850 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-n5h5w"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.686067 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3ee3-account-create-update-wwrbf"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.695003 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3ee3-account-create-update-wwrbf"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.765124 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwqs5"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.800615 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rwqs5"] Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.812253 4746 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-pplw4" message=< Jan 29 16:59:50 crc kubenswrapper[4746]: Exiting ovn-controller (1) [ OK ] Jan 29 16:59:50 crc kubenswrapper[4746]: > Jan 29 16:59:50 crc kubenswrapper[4746]: E0129 16:59:50.812287 4746 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-pplw4" podUID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" containerName="ovn-controller" 
containerID="cri-o://b6dcfeab99a5a8781df1a90e3a3c6cbe494b01e59a357c3e1aea216f06fcbe66" Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.812318 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-pplw4" podUID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" containerName="ovn-controller" containerID="cri-o://b6dcfeab99a5a8781df1a90e3a3c6cbe494b01e59a357c3e1aea216f06fcbe66" gracePeriod=30 Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.812736 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-2bgxn"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.812994 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" podUID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerName="dnsmasq-dns" containerID="cri-o://cd7bdfeaf74aad9a30010243d284207e43e919190849f1ae0003a99378ba6011" gracePeriod=10 Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.867022 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-d2tsk"] Jan 29 16:59:50 crc kubenswrapper[4746]: I0129 16:59:50.894862 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-d2tsk"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.008502 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ls92k"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.030987 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ls92k"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.054057 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c4b578977-hfn59"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.055267 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c4b578977-hfn59" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-api" containerID="cri-o://ffe4f88f98c0c616c8a6607cb72e6acd7cdee0142ea8746e929924d4801cbfca" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.055421 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c4b578977-hfn59" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-httpd" containerID="cri-o://8570c70a880e99072977cb4e1698d7dd3b7ba1f3aac7236951149c68e8cd523d" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.121831 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.121896 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data podName:71c96526-7c37-42c2-896e-b551dd6ed5b8 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:52.121881166 +0000 UTC m=+1514.522465810 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data") pod "rabbitmq-server-0" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8") : configmap "rabbitmq-config-data" not found Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.145045 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rm6fv_fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac/openstack-network-exporter/0.log" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.145091 4746 generic.go:334] "Generic (PLEG): container finished" podID="fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" containerID="bd785516068c374d3a1fab30f0a344202849aa7b8520ee5b2eebfb62b9ebbc3e" exitCode=2 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.145145 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rm6fv" event={"ID":"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac","Type":"ContainerDied","Data":"bd785516068c374d3a1fab30f0a344202849aa7b8520ee5b2eebfb62b9ebbc3e"} Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.157574 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerID="cd7bdfeaf74aad9a30010243d284207e43e919190849f1ae0003a99378ba6011" exitCode=0 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.157673 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" event={"ID":"d1205318-995d-4d3f-8c94-4faab5e1e48a","Type":"ContainerDied","Data":"cd7bdfeaf74aad9a30010243d284207e43e919190849f1ae0003a99378ba6011"} Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.177079 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.180322 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="openstack-network-exporter" containerID="cri-o://3c2e531058476f465ac3dbbb01033f0d27b609383659cd5f42cf8efcfad81000" gracePeriod=300 Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.183424 4746 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 16:59:51 crc kubenswrapper[4746]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 16:59:51 crc kubenswrapper[4746]: + source /usr/local/bin/container-scripts/functions Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNBridge=br-int Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNRemote=tcp:localhost:6642 Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNEncapType=geneve Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNAvailabilityZones= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ EnableChassisAsGateway=true Jan 29 16:59:51 crc kubenswrapper[4746]: ++ PhysicalNetworks= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNHostName= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 16:59:51 crc kubenswrapper[4746]: ++ ovs_dir=/var/lib/openvswitch Jan 29 16:59:51 crc kubenswrapper[4746]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 16:59:51 crc kubenswrapper[4746]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 16:59:51 crc kubenswrapper[4746]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:59:51 crc kubenswrapper[4746]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:59:51 crc kubenswrapper[4746]: + sleep 0.5 Jan 29 16:59:51 crc kubenswrapper[4746]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:59:51 crc kubenswrapper[4746]: + cleanup_ovsdb_server_semaphore Jan 29 16:59:51 crc kubenswrapper[4746]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:59:51 crc kubenswrapper[4746]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 16:59:51 crc kubenswrapper[4746]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-hlgxj" message=< Jan 29 16:59:51 crc kubenswrapper[4746]: Exiting ovsdb-server (5) [ OK ] Jan 29 16:59:51 crc kubenswrapper[4746]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 16:59:51 crc kubenswrapper[4746]: + source /usr/local/bin/container-scripts/functions Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNBridge=br-int Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNRemote=tcp:localhost:6642 Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNEncapType=geneve Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNAvailabilityZones= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ EnableChassisAsGateway=true Jan 29 16:59:51 crc kubenswrapper[4746]: ++ PhysicalNetworks= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNHostName= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 16:59:51 crc kubenswrapper[4746]: ++ ovs_dir=/var/lib/openvswitch Jan 29 16:59:51 crc kubenswrapper[4746]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 16:59:51 crc kubenswrapper[4746]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 16:59:51 crc kubenswrapper[4746]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:59:51 crc kubenswrapper[4746]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:59:51 crc kubenswrapper[4746]: + sleep 0.5 Jan 29 16:59:51 crc kubenswrapper[4746]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:59:51 crc kubenswrapper[4746]: + cleanup_ovsdb_server_semaphore Jan 29 16:59:51 crc kubenswrapper[4746]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:59:51 crc kubenswrapper[4746]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 16:59:51 crc kubenswrapper[4746]: > Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.183458 4746 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 16:59:51 crc kubenswrapper[4746]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 16:59:51 crc kubenswrapper[4746]: + source /usr/local/bin/container-scripts/functions Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNBridge=br-int Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNRemote=tcp:localhost:6642 Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNEncapType=geneve Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNAvailabilityZones= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ EnableChassisAsGateway=true Jan 29 16:59:51 crc kubenswrapper[4746]: ++ PhysicalNetworks= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ OVNHostName= Jan 29 16:59:51 crc kubenswrapper[4746]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 16:59:51 crc kubenswrapper[4746]: ++ ovs_dir=/var/lib/openvswitch Jan 29 16:59:51 crc kubenswrapper[4746]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 16:59:51 crc kubenswrapper[4746]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 16:59:51 crc kubenswrapper[4746]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:59:51 crc kubenswrapper[4746]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:59:51 crc kubenswrapper[4746]: + sleep 0.5 Jan 29 16:59:51 crc kubenswrapper[4746]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:59:51 crc kubenswrapper[4746]: + cleanup_ovsdb_server_semaphore Jan 29 16:59:51 crc kubenswrapper[4746]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:59:51 crc kubenswrapper[4746]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 16:59:51 crc kubenswrapper[4746]: > pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" containerID="cri-o://ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.183487 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" containerID="cri-o://ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.188322 4746 generic.go:334] "Generic (PLEG): container finished" podID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerID="8d41c00ff4e878b0ca19eebfb37df14fb06c2ce7bba3e45e02c666faf55cdc88" exitCode=2 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.190032 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cdeb76e4-0143-44ad-935d-eb486d6fa9dc","Type":"ContainerDied","Data":"8d41c00ff4e878b0ca19eebfb37df14fb06c2ce7bba3e45e02c666faf55cdc88"} Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.214840 4746 generic.go:334] "Generic (PLEG): container finished" podID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" containerID="b6dcfeab99a5a8781df1a90e3a3c6cbe494b01e59a357c3e1aea216f06fcbe66" exitCode=0 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.214883 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4" event={"ID":"9d0831ca-9258-426a-b0d5-9ae88e24daa2","Type":"ContainerDied","Data":"b6dcfeab99a5a8781df1a90e3a3c6cbe494b01e59a357c3e1aea216f06fcbe66"} Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.226672 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd" containerID="cri-o://b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.307761 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="ovsdbserver-sb" containerID="cri-o://853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830" gracePeriod=300 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.320873 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.323182 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="openstack-network-exporter" containerID="cri-o://67b114c6f1d8b35d3fbd8d9d423762318eaad81860bbb0ff538250cf11081b4c" gracePeriod=300 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.325466 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rm6fv_fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac/openstack-network-exporter/0.log" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 
16:59:51.325550 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.334685 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-metrics-certs-tls-certs\") pod \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.334785 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovs-rundir\") pod \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.334870 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rpzq\" (UniqueName: \"kubernetes.io/projected/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-kube-api-access-9rpzq\") pod \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.334998 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovn-rundir\") pod \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.335023 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-combined-ca-bundle\") pod \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.335044 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-config\") pod \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\" (UID: \"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac\") " Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.343212 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-p9k8d"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.343312 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" (UID: "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.346058 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" (UID: "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.351830 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-config" (OuterVolumeSpecName: "config") pod "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" (UID: "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.347071 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.375723 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9b7cbf56d-9h4gg"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.376000 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-9b7cbf56d-9h4gg" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-log" containerID="cri-o://fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.380225 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-kube-api-access-9rpzq" (OuterVolumeSpecName: "kube-api-access-9rpzq") pod "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" (UID: "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac"). InnerVolumeSpecName "kube-api-access-9rpzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.380317 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-9b7cbf56d-9h4gg" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-api" containerID="cri-o://08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.391858 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" (UID: "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.416126 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="ovsdbserver-nb" containerID="cri-o://3126dbd2cf50d8c8e7a9683b6da26e8324aad907d712924fe7acecce195f923a" gracePeriod=300 Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.417701 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830 is running failed: container process not found" containerID="853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.422642 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830 is running failed: container process not found" containerID="853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.423874 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830 is running failed: container process not found" containerID="853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.423914 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="ovsdbserver-sb" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.445367 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-p9k8d"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.456634 4746 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.456658 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rpzq\" (UniqueName: \"kubernetes.io/projected/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-kube-api-access-9rpzq\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.456668 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.456677 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.488774 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.489009 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-log" containerID="cri-o://61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.489352 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-httpd" containerID="cri-o://f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.503522 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.503994 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-server" containerID="cri-o://22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504108 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="swift-recon-cron" containerID="cri-o://ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504154 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="rsync" containerID="cri-o://3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504203 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-expirer" containerID="cri-o://0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504234 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-updater" containerID="cri-o://d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504260 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-auditor" containerID="cri-o://cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504286 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-replicator" containerID="cri-o://77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504313 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-server" containerID="cri-o://66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504340 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-updater" containerID="cri-o://7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504369 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-auditor" containerID="cri-o://86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504396 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-replicator" containerID="cri-o://f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504423 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-server" containerID="cri-o://9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504454 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-reaper" containerID="cri-o://d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504485 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-auditor" containerID="cri-o://30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.504513 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-replicator" containerID="cri-o://55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.533338 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vz8vc"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.541150 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vz8vc"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.558451 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.558693 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" containerName="nova-scheduler-scheduler" containerID="cri-o://905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.595270 4746 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7d9fr"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.635340 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.635683 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-log" containerID="cri-o://d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.636244 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-httpd" containerID="cri-o://cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.656242 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-68qbv"] Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.671119 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:51 crc kubenswrapper[4746]: E0129 16:59:51.671214 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data podName:6b6e0a39-5c0e-4632-bc24-dd8c7eb25788 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:53.671178217 +0000 UTC m=+1516.071762861 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data") pod "rabbitmq-cell1-server-0" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.672001 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" (UID: "fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.696966 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-68qbv"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.709709 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56c7-account-create-update-cjx52"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.719457 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56c7-account-create-update-cjx52"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.726814 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.727722 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-api" containerID="cri-o://90d0f7c0ec8bee68f1032e1115bb3957e1cc29de95dedf8075f362d0b3ca5802" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.727965 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-log" containerID="cri-o://f5dbc0994f4e33f3d35d508e2ee9e277a69d60f776de81a42fb9ff89c6a2d705" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.735092 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-z7zjw"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.748930 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-z7zjw"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.762245 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.762779 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-log" containerID="cri-o://9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.765250 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-metadata" containerID="cri-o://34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5" gracePeriod=30 Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.775069 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.780262 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.788437 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-t5tbp"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.800047 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hpls4"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.810262 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-t5tbp"] Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.849301 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pplw4"
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.864642 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.864939 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api-log" containerID="cri-o://14a457ada9ded8a131b71b82b1d68aaab179b4e4656c9f30a8ff693e5705512c" gracePeriod=30
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.865103 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api" containerID="cri-o://1a56f205c5cc1a3a3d1140e62eef702ab9a791c26bc2dc47a9b8cf3218933c17" gracePeriod=30
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.878934 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb4gq\" (UniqueName: \"kubernetes.io/projected/9d0831ca-9258-426a-b0d5-9ae88e24daa2-kube-api-access-lb4gq\") pod \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") "
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.878979 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-log-ovn\") pod \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") "
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.879008 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run-ovn\") pod \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") "
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.879074 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-ovn-controller-tls-certs\") pod \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") "
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.879108 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d0831ca-9258-426a-b0d5-9ae88e24daa2-scripts\") pod \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") "
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.879147 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-combined-ca-bundle\") pod \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") "
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.879888 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run\") pod \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\" (UID: \"9d0831ca-9258-426a-b0d5-9ae88e24daa2\") "
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.880503 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run" (OuterVolumeSpecName: "var-run") pod "9d0831ca-9258-426a-b0d5-9ae88e24daa2" (UID: "9d0831ca-9258-426a-b0d5-9ae88e24daa2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.886587 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d0831ca-9258-426a-b0d5-9ae88e24daa2" (UID: "9d0831ca-9258-426a-b0d5-9ae88e24daa2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.888663 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d0831ca-9258-426a-b0d5-9ae88e24daa2" (UID: "9d0831ca-9258-426a-b0d5-9ae88e24daa2"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.896675 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0884-account-create-update-m66xk"]
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.897150 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0831ca-9258-426a-b0d5-9ae88e24daa2-scripts" (OuterVolumeSpecName: "scripts") pod "9d0831ca-9258-426a-b0d5-9ae88e24daa2" (UID: "9d0831ca-9258-426a-b0d5-9ae88e24daa2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.922594 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0831ca-9258-426a-b0d5-9ae88e24daa2-kube-api-access-lb4gq" (OuterVolumeSpecName: "kube-api-access-lb4gq") pod "9d0831ca-9258-426a-b0d5-9ae88e24daa2" (UID: "9d0831ca-9258-426a-b0d5-9ae88e24daa2"). InnerVolumeSpecName "kube-api-access-lb4gq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.969219 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0884-account-create-update-m66xk"]
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.983452 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb4gq\" (UniqueName: \"kubernetes.io/projected/9d0831ca-9258-426a-b0d5-9ae88e24daa2-kube-api-access-lb4gq\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.983482 4746 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.983491 4746 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.983501 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d0831ca-9258-426a-b0d5-9ae88e24daa2-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.983509 4746 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d0831ca-9258-426a-b0d5-9ae88e24daa2-var-run\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.983682 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.983910 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="cinder-scheduler" containerID="cri-o://59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e" gracePeriod=30
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.984323 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="probe" containerID="cri-o://0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f" gracePeriod=30
Jan 29 16:59:51 crc kubenswrapper[4746]: I0129 16:59:51.996964 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xhqj8"]
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.005515 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xhqj8"]
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.013210 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-797rg"]
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.019665 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-797rg"]
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.027947 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d0831ca-9258-426a-b0d5-9ae88e24daa2" (UID: "9d0831ca-9258-426a-b0d5-9ae88e24daa2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.039388 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.045567 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lmdqh"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.051866 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lmdqh"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.052293 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.059253 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.072775 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.072838 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" containerName="nova-scheduler-scheduler" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.087587 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7cd65c77b7-kbbjb"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.087865 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-httpd" containerID="cri-o://64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.088019 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-server" containerID="cri-o://8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.088718 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.096655 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-fd8d7b7c5-2bjng"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.096959 4746 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-worker-fd8d7b7c5-2bjng" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker-log" containerID="cri-o://d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.097376 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker" containerID="cri-o://d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.108580 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "9d0831ca-9258-426a-b0d5-9ae88e24daa2" (UID: "9d0831ca-9258-426a-b0d5-9ae88e24daa2"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.108932 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.109152 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.111109 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="rabbitmq" containerID="cri-o://9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553" gracePeriod=604800 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.123458 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-64978c9b7d-d9wgb"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.123712 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener-log" containerID="cri-o://c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.124120 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener" containerID="cri-o://2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.133070 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.154297 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-654869dd86-s9th4"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.154559 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-654869dd86-s9th4" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api-log" 
containerID="cri-o://bda6ef59fe6c6aa36650accec8c47f1fb248a6d1176f6aed98b5503facb4cdb6" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.155046 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-654869dd86-s9th4" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api" containerID="cri-o://c78b0cc4c733ab33d81ae04bcb4447f430f04b3f564487ae982eadf1d345566d" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.190841 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d0831ca-9258-426a-b0d5-9ae88e24daa2-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.190907 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.190959 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data podName:71c96526-7c37-42c2-896e-b551dd6ed5b8 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:54.190945379 +0000 UTC m=+1516.591530023 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data") pod "rabbitmq-server-0" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8") : configmap "rabbitmq-config-data" not found Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.215262 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerName="rabbitmq" containerID="cri-o://6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45" gracePeriod=604800 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.221293 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="galera" containerID="cri-o://fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.306855 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7d9fr" event={"ID":"a5213774-9475-450b-a26f-2212d807c39f","Type":"ContainerStarted","Data":"3299a49c2bda8c3c664a734d50b79ebe8796b2c46837c3cabd449a4ea46e4c1d"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.356953 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.357166 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2cbc4caa-43b8-42c2-83ae-e2448dda745f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.363281 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4555f5c-9440-4402-96f9-e2bf40c5cfb1/ovsdbserver-nb/0.log" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.363323 4746 generic.go:334] "Generic (PLEG): container finished" podID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" 
containerID="67b114c6f1d8b35d3fbd8d9d423762318eaad81860bbb0ff538250cf11081b4c" exitCode=2 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.363439 4746 generic.go:334] "Generic (PLEG): container finished" podID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerID="3126dbd2cf50d8c8e7a9683b6da26e8324aad907d712924fe7acecce195f923a" exitCode=143 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.363619 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4555f5c-9440-4402-96f9-e2bf40c5cfb1","Type":"ContainerDied","Data":"67b114c6f1d8b35d3fbd8d9d423762318eaad81860bbb0ff538250cf11081b4c"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.363765 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4555f5c-9440-4402-96f9-e2bf40c5cfb1","Type":"ContainerDied","Data":"3126dbd2cf50d8c8e7a9683b6da26e8324aad907d712924fe7acecce195f923a"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.371881 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rm6fv_fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac/openstack-network-exporter/0.log" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.372209 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rm6fv" event={"ID":"fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac","Type":"ContainerDied","Data":"a6036d8b84305af51faed49df55cdf4b3f55a3a87a4f810532cd0208309771f4"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.372455 4746 scope.go:117] "RemoveContainer" containerID="bd785516068c374d3a1fab30f0a344202849aa7b8520ee5b2eebfb62b9ebbc3e" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.373087 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rm6fv" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.380836 4746 generic.go:334] "Generic (PLEG): container finished" podID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerID="d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5" exitCode=143 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.380904 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6","Type":"ContainerDied","Data":"d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.386532 4746 generic.go:334] "Generic (PLEG): container finished" podID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerID="61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31" exitCode=143 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.386800 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f93a42f7-a972-44c2-a2a4-5f698ba4caf7","Type":"ContainerDied","Data":"61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.389696 4746 generic.go:334] "Generic (PLEG): container finished" podID="0185f119-b92f-4f05-9d0d-f0b2e081c331" containerID="e4ad991870d64b906f98b966d5a79dd93b5367dac6510d8b9f9b6e56123d442e" exitCode=137 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.389778 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rjblb"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.390971 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.410365 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pplw4" event={"ID":"9d0831ca-9258-426a-b0d5-9ae88e24daa2","Type":"ContainerDied","Data":"3280aea4a753fd21e0bd1b9fd9444acee1b1712f409c4d01fd7d4fb20141f833"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.410557 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pplw4" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.416328 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rjblb"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.430959 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q96jb"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450570 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450621 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450631 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450638 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450783 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450801 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450809 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450817 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450825 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450833 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450842 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.450849 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.461392 4746 
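The exitCode values in the PLEG "container finished" entries follow the usual shell convention: 0 is a clean stop, and codes above 128 are 128 plus the fatal signal number, so 143 is SIGTERM (a container that honored the grace period) and 137 is SIGKILL (one that had to be force-killed). A small helper to decode them:

package main

import (
	"fmt"
	"syscall"
)

// describeExit decodes container exit codes per the 128+signal rule.
func describeExit(code int) string {
	if code > 128 {
		return fmt.Sprintf("killed by signal %d (%v)", code-128, syscall.Signal(code-128))
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	// The codes observed in this teardown.
	for _, c := range []int{0, 2, 137, 143} {
		fmt.Printf("%3d -> %s\n", c, describeExit(c))
	}
}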
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.461949 4746 generic.go:334] "Generic (PLEG): container finished" podID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" exitCode=0
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.493440 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1f9808-a20d-4fdc-b2ea-586d3b917cc8" path="/var/lib/kubelet/pods/0f1f9808-a20d-4fdc-b2ea-586d3b917cc8/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.494507 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2472226d-c8d4-48ef-a2da-f3dc8fd9695b" path="/var/lib/kubelet/pods/2472226d-c8d4-48ef-a2da-f3dc8fd9695b/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.495554 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab64d41-3d73-42d4-abfc-7c65b9c54970" path="/var/lib/kubelet/pods/2ab64d41-3d73-42d4-abfc-7c65b9c54970/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.499318 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3404f909-99a2-4dd2-b7c4-0990f400d875" path="/var/lib/kubelet/pods/3404f909-99a2-4dd2-b7c4-0990f400d875/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.502759 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="380b315f-5021-4a7c-892b-99545fb9c5cd" path="/var/lib/kubelet/pods/380b315f-5021-4a7c-892b-99545fb9c5cd/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.502822 4746 generic.go:334] "Generic (PLEG): container finished" podID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerID="f5dbc0994f4e33f3d35d508e2ee9e277a69d60f776de81a42fb9ff89c6a2d705" exitCode=143
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.503372 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1" path="/var/lib/kubelet/pods/3bbafa75-dfb9-4e1f-91c3-17eb6dbc7ab1/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.503835 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d51988e-2959-4dc6-af55-b3c40c2428ee" path="/var/lib/kubelet/pods/4d51988e-2959-4dc6-af55-b3c40c2428ee/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.505608 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a81565e-25dc-4269-8e78-c953acef207b" path="/var/lib/kubelet/pods/5a81565e-25dc-4269-8e78-c953acef207b/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.506247 4746 generic.go:334] "Generic (PLEG): container finished" podID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerID="9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55" exitCode=143
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.506286 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8319ac14-e61e-4c9f-b22a-b2f08d0a6723" path="/var/lib/kubelet/pods/8319ac14-e61e-4c9f-b22a-b2f08d0a6723/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.507104 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f44c462-c033-46c5-a3f8-090bb92234c8" path="/var/lib/kubelet/pods/8f44c462-c033-46c5-a3f8-090bb92234c8/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.507877 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96040470-4dec-4d11-9751-860b901ca710" path="/var/lib/kubelet/pods/96040470-4dec-4d11-9751-860b901ca710/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.508792 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc85a95-136d-4ffe-97ab-adea84894a76" path="/var/lib/kubelet/pods/abc85a95-136d-4ffe-97ab-adea84894a76/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.510949 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6375d75-bb3d-4f1e-a5d7-3474f937d241" path="/var/lib/kubelet/pods/b6375d75-bb3d-4f1e-a5d7-3474f937d241/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.511762 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71f6b91-adc4-409b-80f6-8255c8c98f1a" path="/var/lib/kubelet/pods/b71f6b91-adc4-409b-80f6-8255c8c98f1a/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.512590 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9c66b9-97e0-49b8-8229-2e90537ad349" path="/var/lib/kubelet/pods/cb9c66b9-97e0-49b8-8229-2e90537ad349/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.514417 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb9db3bc-78e5-462d-80cb-8022f80959ab" path="/var/lib/kubelet/pods/cb9db3bc-78e5-462d-80cb-8022f80959ab/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.515160 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc2d9bf4-a560-4888-bd41-01b29066a20c" path="/var/lib/kubelet/pods/cc2d9bf4-a560-4888-bd41-01b29066a20c/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.516307 4746 generic.go:334] "Generic (PLEG): container finished" podID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerID="fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7" exitCode=143
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.517272 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbd102d-2909-4060-a027-5ebcc13063fb" path="/var/lib/kubelet/pods/dcbd102d-2909-4060-a027-5ebcc13063fb/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.519840 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d29372-486b-4db2-8e2e-cd09059c9edc" path="/var/lib/kubelet/pods/e8d29372-486b-4db2-8e2e-cd09059c9edc/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.520880 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f92e2416-cc6d-4276-a96b-446a90bb18c0" path="/var/lib/kubelet/pods/f92e2416-cc6d-4276-a96b-446a90bb18c0/volumes"
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.534144 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-svc\") pod \"d1205318-995d-4d3f-8c94-4faab5e1e48a\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") "
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.534266 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxmnd\" (UniqueName: \"kubernetes.io/projected/d1205318-995d-4d3f-8c94-4faab5e1e48a-kube-api-access-cxmnd\") pod \"d1205318-995d-4d3f-8c94-4faab5e1e48a\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") "
Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.534325 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-sb\") pod \"d1205318-995d-4d3f-8c94-4faab5e1e48a\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") "
\"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-sb\") pod \"d1205318-995d-4d3f-8c94-4faab5e1e48a\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.534386 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-swift-storage-0\") pod \"d1205318-995d-4d3f-8c94-4faab5e1e48a\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.534443 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-nb\") pod \"d1205318-995d-4d3f-8c94-4faab5e1e48a\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.534467 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-config\") pod \"d1205318-995d-4d3f-8c94-4faab5e1e48a\" (UID: \"d1205318-995d-4d3f-8c94-4faab5e1e48a\") " Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.543100 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1205318-995d-4d3f-8c94-4faab5e1e48a-kube-api-access-cxmnd" (OuterVolumeSpecName: "kube-api-access-cxmnd") pod "d1205318-995d-4d3f-8c94-4faab5e1e48a" (UID: "d1205318-995d-4d3f-8c94-4faab5e1e48a"). InnerVolumeSpecName "kube-api-access-cxmnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.551952 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_56be91c6-da82-45b5-9b98-d5b6f05f244e/ovsdbserver-sb/0.log" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.552535 4746 generic.go:334] "Generic (PLEG): container finished" podID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerID="3c2e531058476f465ac3dbbb01033f0d27b609383659cd5f42cf8efcfad81000" exitCode=2 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.552554 4746 generic.go:334] "Generic (PLEG): container finished" podID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerID="853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830" exitCode=143 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.581609 4746 generic.go:334] "Generic (PLEG): container finished" podID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerID="8570c70a880e99072977cb4e1698d7dd3b7ba1f3aac7236951149c68e8cd523d" exitCode=0 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.597833 4746 generic.go:334] "Generic (PLEG): container finished" podID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerID="14a457ada9ded8a131b71b82b1d68aaab179b4e4656c9f30a8ff693e5705512c" exitCode=143 Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.639492 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.641947 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.642561 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.642592 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.643275 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.645538 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxmnd\" (UniqueName: \"kubernetes.io/projected/d1205318-995d-4d3f-8c94-4faab5e1e48a-kube-api-access-cxmnd\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.649054 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.665158 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1205318-995d-4d3f-8c94-4faab5e1e48a" (UID: "d1205318-995d-4d3f-8c94-4faab5e1e48a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.684929 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.684998 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.716404 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1205318-995d-4d3f-8c94-4faab5e1e48a" (UID: "d1205318-995d-4d3f-8c94-4faab5e1e48a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.734729 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1205318-995d-4d3f-8c94-4faab5e1e48a" (UID: "d1205318-995d-4d3f-8c94-4faab5e1e48a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.756275 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.756384 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.756450 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.774049 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-config" (OuterVolumeSpecName: "config") pod "d1205318-995d-4d3f-8c94-4faab5e1e48a" (UID: "d1205318-995d-4d3f-8c94-4faab5e1e48a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.781883 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1205318-995d-4d3f-8c94-4faab5e1e48a" (UID: "d1205318-995d-4d3f-8c94-4faab5e1e48a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.857762 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.857784 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1205318-995d-4d3f-8c94-4faab5e1e48a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895012 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895056 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895146 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q96jb"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895238 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895265 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895583 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" containerName="nova-cell1-conductor-conductor" containerID="cri-o://c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" gracePeriod=30 Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895900 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895919 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.895931 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e"} Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.896893 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897118 4746 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ovn-controller-metrics-rm6fv"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897150 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-rm6fv"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897166 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pplw4"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897180 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pplw4"] Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897244 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897261 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897273 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897286 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897307 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897321 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlgxj" event={"ID":"db69fbf3-38bd-403b-b1e6-fbd724d15250","Type":"ContainerDied","Data":"ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897336 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cfbdab6-8b6e-4199-808c-be07e373df64","Type":"ContainerDied","Data":"f5dbc0994f4e33f3d35d508e2ee9e277a69d60f776de81a42fb9ff89c6a2d705"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897351 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abd6dc02-1269-43b8-a1aa-d239875e4902","Type":"ContainerDied","Data":"9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897367 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7cbf56d-9h4gg" event={"ID":"9db12a59-b8e4-43e4-add4-9cb361cfe6c5","Type":"ContainerDied","Data":"fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7"} Jan 29 
16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897422 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"56be91c6-da82-45b5-9b98-d5b6f05f244e","Type":"ContainerDied","Data":"3c2e531058476f465ac3dbbb01033f0d27b609383659cd5f42cf8efcfad81000"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897437 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"56be91c6-da82-45b5-9b98-d5b6f05f244e","Type":"ContainerDied","Data":"853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897450 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c4b578977-hfn59" event={"ID":"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6","Type":"ContainerDied","Data":"8570c70a880e99072977cb4e1698d7dd3b7ba1f3aac7236951149c68e8cd523d"} Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.897463 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf727c52-99b6-4ab8-9815-4ab8c2dd5050","Type":"ContainerDied","Data":"14a457ada9ded8a131b71b82b1d68aaab179b4e4656c9f30a8ff693e5705512c"} Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.898889 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.900580 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:59:52 crc kubenswrapper[4746]: E0129 16:59:52.900622 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="2cbc4caa-43b8-42c2-83ae-e2448dda745f" containerName="nova-cell0-conductor-conductor" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.904501 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_56be91c6-da82-45b5-9b98-d5b6f05f244e/ovsdbserver-sb/0.log" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.904561 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:59:52 crc kubenswrapper[4746]: I0129 16:59:52.964443 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060270 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-config\") pod \"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060398 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config\") pod \"0185f119-b92f-4f05-9d0d-f0b2e081c331\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060497 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-combined-ca-bundle\") pod \"0185f119-b92f-4f05-9d0d-f0b2e081c331\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060595 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config-secret\") pod \"0185f119-b92f-4f05-9d0d-f0b2e081c331\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060623 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-metrics-certs-tls-certs\") pod \"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060693 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j99th\" (UniqueName: \"kubernetes.io/projected/56be91c6-da82-45b5-9b98-d5b6f05f244e-kube-api-access-j99th\") pod \"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060715 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg9f4\" (UniqueName: \"kubernetes.io/projected/0185f119-b92f-4f05-9d0d-f0b2e081c331-kube-api-access-rg9f4\") pod \"0185f119-b92f-4f05-9d0d-f0b2e081c331\" (UID: \"0185f119-b92f-4f05-9d0d-f0b2e081c331\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060743 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdb-rundir\") pod \"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060796 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-scripts\") pod \"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060855 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-combined-ca-bundle\") pod 
\"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060881 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.060917 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdbserver-sb-tls-certs\") pod \"56be91c6-da82-45b5-9b98-d5b6f05f244e\" (UID: \"56be91c6-da82-45b5-9b98-d5b6f05f244e\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.061164 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-config" (OuterVolumeSpecName: "config") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.061625 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.061859 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.061886 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.062280 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-scripts" (OuterVolumeSpecName: "scripts") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.071234 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0185f119-b92f-4f05-9d0d-f0b2e081c331-kube-api-access-rg9f4" (OuterVolumeSpecName: "kube-api-access-rg9f4") pod "0185f119-b92f-4f05-9d0d-f0b2e081c331" (UID: "0185f119-b92f-4f05-9d0d-f0b2e081c331"). InnerVolumeSpecName "kube-api-access-rg9f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.094319 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.094365 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56be91c6-da82-45b5-9b98-d5b6f05f244e-kube-api-access-j99th" (OuterVolumeSpecName: "kube-api-access-j99th") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "kube-api-access-j99th". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.102908 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0185f119-b92f-4f05-9d0d-f0b2e081c331" (UID: "0185f119-b92f-4f05-9d0d-f0b2e081c331"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.120697 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.123273 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4555f5c-9440-4402-96f9-e2bf40c5cfb1/ovsdbserver-nb/0.log" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.123344 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.126277 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0185f119-b92f-4f05-9d0d-f0b2e081c331" (UID: "0185f119-b92f-4f05-9d0d-f0b2e081c331"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.132763 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0185f119-b92f-4f05-9d0d-f0b2e081c331" (UID: "0185f119-b92f-4f05-9d0d-f0b2e081c331"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163419 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163454 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163466 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0185f119-b92f-4f05-9d0d-f0b2e081c331-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163482 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j99th\" (UniqueName: \"kubernetes.io/projected/56be91c6-da82-45b5-9b98-d5b6f05f244e-kube-api-access-j99th\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163493 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg9f4\" (UniqueName: \"kubernetes.io/projected/0185f119-b92f-4f05-9d0d-f0b2e081c331-kube-api-access-rg9f4\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163502 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56be91c6-da82-45b5-9b98-d5b6f05f244e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163510 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.163528 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.174047 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.185903 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.207405 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "56be91c6-da82-45b5-9b98-d5b6f05f244e" (UID: "56be91c6-da82-45b5-9b98-d5b6f05f244e"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.232687 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266072 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-combined-ca-bundle\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266156 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdb-rundir\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266206 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-metrics-certs-tls-certs\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266249 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-scripts\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266288 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdbserver-nb-tls-certs\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266402 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266421 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-config\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.266453 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tshwf\" (UniqueName: \"kubernetes.io/projected/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-kube-api-access-tshwf\") pod \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\" (UID: \"b4555f5c-9440-4402-96f9-e2bf40c5cfb1\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.267863 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.267884 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.267893 4746 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56be91c6-da82-45b5-9b98-d5b6f05f244e-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.269020 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-config" (OuterVolumeSpecName: "config") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.269353 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.269926 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-scripts" (OuterVolumeSpecName: "scripts") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.297511 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.304362 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.318426 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-kube-api-access-tshwf" (OuterVolumeSpecName: "kube-api-access-tshwf") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "kube-api-access-tshwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.338544 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.368589 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v228n\" (UniqueName: \"kubernetes.io/projected/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-kube-api-access-v228n\") pod \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.368639 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-vencrypt-tls-certs\") pod \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.368777 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-nova-novncproxy-tls-certs\") pod \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.368837 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-combined-ca-bundle\") pod \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.368888 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-config-data\") pod \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\" (UID: \"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.369496 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.369509 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.369518 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tshwf\" (UniqueName: \"kubernetes.io/projected/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-kube-api-access-tshwf\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.369529 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.369537 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.369545 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.377008 4746 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-kube-api-access-v228n" (OuterVolumeSpecName: "kube-api-access-v228n") pod "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" (UID: "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df"). InnerVolumeSpecName "kube-api-access-v228n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.393600 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.410573 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.429115 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b4555f5c-9440-4402-96f9-e2bf40c5cfb1" (UID: "b4555f5c-9440-4402-96f9-e2bf40c5cfb1"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.427504 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" (UID: "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.431888 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-config-data" (OuterVolumeSpecName: "config-data") pod "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" (UID: "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.465534 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" (UID: "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.470570 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-internal-tls-certs\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.470680 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-etc-swift\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.470829 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-log-httpd\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.470875 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-public-tls-certs\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.470900 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-config-data\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.470939 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-combined-ca-bundle\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.471010 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgk5t\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-kube-api-access-vgk5t\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.471065 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-run-httpd\") pod \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\" (UID: \"5c0d701f-0ba6-4836-b3f9-1425b411d80d\") " Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.471881 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.472053 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.472068 4746 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-v228n\" (UniqueName: \"kubernetes.io/projected/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-kube-api-access-v228n\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.472081 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4555f5c-9440-4402-96f9-e2bf40c5cfb1-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.472093 4746 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.472106 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.472118 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.472469 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.477630 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.483845 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.486613 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" (UID: "8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.506477 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-kube-api-access-vgk5t" (OuterVolumeSpecName: "kube-api-access-vgk5t") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "kube-api-access-vgk5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.531847 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-config-data" (OuterVolumeSpecName: "config-data") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.574457 4746 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.575847 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.576423 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.576561 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgk5t\" (UniqueName: \"kubernetes.io/projected/5c0d701f-0ba6-4836-b3f9-1425b411d80d-kube-api-access-vgk5t\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.576672 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c0d701f-0ba6-4836-b3f9-1425b411d80d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.576776 4746 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.580146 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.593955 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.619304 4746 generic.go:334] "Generic (PLEG): container finished" podID="a5213774-9475-450b-a26f-2212d807c39f" containerID="4dfe489e2b680e7d9a9284cb8542834e8149f37a867c50860143e6bb9632324b" exitCode=1 Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.619405 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7d9fr" event={"ID":"a5213774-9475-450b-a26f-2212d807c39f","Type":"ContainerDied","Data":"4dfe489e2b680e7d9a9284cb8542834e8149f37a867c50860143e6bb9632324b"} Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.619929 4746 scope.go:117] "RemoveContainer" containerID="4dfe489e2b680e7d9a9284cb8542834e8149f37a867c50860143e6bb9632324b" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.638392 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c0d701f-0ba6-4836-b3f9-1425b411d80d" (UID: "5c0d701f-0ba6-4836-b3f9-1425b411d80d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.644620 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b4555f5c-9440-4402-96f9-e2bf40c5cfb1/ovsdbserver-nb/0.log" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.644737 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b4555f5c-9440-4402-96f9-e2bf40c5cfb1","Type":"ContainerDied","Data":"1e51bb42588a10a699c38f4f4fb5f3c36d31e6ea69da67e5787d4cadad0b65bf"} Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.644779 4746 scope.go:117] "RemoveContainer" containerID="67b114c6f1d8b35d3fbd8d9d423762318eaad81860bbb0ff538250cf11081b4c" Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.644910 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.674331 4746 generic.go:334] "Generic (PLEG): container finished" podID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerID="d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5" exitCode=143
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.674399 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" event={"ID":"f19d23b1-5d41-40a9-88ee-23a039de0ed7","Type":"ContainerDied","Data":"d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.681359 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.681390 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.681402 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0d701f-0ba6-4836-b3f9-1425b411d80d-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:53 crc kubenswrapper[4746]: E0129 16:59:53.681492 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 29 16:59:53 crc kubenswrapper[4746]: E0129 16:59:53.681626 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data podName:6b6e0a39-5c0e-4632-bc24-dd8c7eb25788 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:57.681533012 +0000 UTC m=+1520.082117656 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data") pod "rabbitmq-cell1-server-0" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788") : configmap "rabbitmq-cell1-config-data" not found
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.697161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn" event={"ID":"d1205318-995d-4d3f-8c94-4faab5e1e48a","Type":"ContainerDied","Data":"e06fbd5a7ae2e801c15c65c4eb3c5043b1813a02a3ce53ba997e0417b6330ee5"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.697300 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-2bgxn"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.701981 4746 generic.go:334] "Generic (PLEG): container finished" podID="8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" containerID="c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d" exitCode=0
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.702052 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df","Type":"ContainerDied","Data":"c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.702081 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df","Type":"ContainerDied","Data":"60ca2f05d0ce1b39248b519dec6ffa6cd71d00d643127d49b94250b92f519a7a"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.702143 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.710074 4746 generic.go:334] "Generic (PLEG): container finished" podID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerID="8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871" exitCode=0
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.710103 4746 generic.go:334] "Generic (PLEG): container finished" podID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerID="64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09" exitCode=0
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.710134 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7cd65c77b7-kbbjb"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.710149 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" event={"ID":"5c0d701f-0ba6-4836-b3f9-1425b411d80d","Type":"ContainerDied","Data":"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.710175 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" event={"ID":"5c0d701f-0ba6-4836-b3f9-1425b411d80d","Type":"ContainerDied","Data":"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.710205 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cd65c77b7-kbbjb" event={"ID":"5c0d701f-0ba6-4836-b3f9-1425b411d80d","Type":"ContainerDied","Data":"8e1c8926e9dcf96c4c9de7e66da55db6276e763c8c2690d21c37f97e0a95ca24"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.719714 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab" exitCode=0
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.719743 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf" exitCode=0
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.719788 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.719811 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.723366 4746 generic.go:334] "Generic (PLEG): container finished" podID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerID="c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0" exitCode=143
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.723434 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" event={"ID":"c530fa14-8291-45d6-800c-54fd9716fa1d","Type":"ContainerDied","Data":"c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.737045 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_56be91c6-da82-45b5-9b98-d5b6f05f244e/ovsdbserver-sb/0.log"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.737134 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"56be91c6-da82-45b5-9b98-d5b6f05f244e","Type":"ContainerDied","Data":"5097f64a55ef9ce59ef3562295883e2ce21b691c3d93b381573caf41b0d08415"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.737410 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.761393 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="galera" probeResult="failure" output="command timed out"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.764710 4746 generic.go:334] "Generic (PLEG): container finished" podID="76935545-e8e3-4523-97b0-edce25c6756d" containerID="0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f" exitCode=0
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.764760 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76935545-e8e3-4523-97b0-edce25c6756d","Type":"ContainerDied","Data":"0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.768288 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.789641 4746 generic.go:334] "Generic (PLEG): container finished" podID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerID="bda6ef59fe6c6aa36650accec8c47f1fb248a6d1176f6aed98b5503facb4cdb6" exitCode=143
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.789685 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654869dd86-s9th4" event={"ID":"33cf45d3-8c95-4453-9a1e-46ad14bce822","Type":"ContainerDied","Data":"bda6ef59fe6c6aa36650accec8c47f1fb248a6d1176f6aed98b5503facb4cdb6"}
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.829801 4746 scope.go:117] "RemoveContainer" containerID="3126dbd2cf50d8c8e7a9683b6da26e8324aad907d712924fe7acecce195f923a"
Jan 29 16:59:53 crc kubenswrapper[4746]: I0129 16:59:53.996728 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7cd65c77b7-kbbjb"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.022509 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-7cd65c77b7-kbbjb"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.049440 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.056615 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.102825 4746 scope.go:117] "RemoveContainer" containerID="cd7bdfeaf74aad9a30010243d284207e43e919190849f1ae0003a99378ba6011"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.103008 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-2bgxn"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.114454 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-2bgxn"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.120859 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.135805 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.148482 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.163325 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.183545 4746 scope.go:117] "RemoveContainer" containerID="144102022a25ffa5da982493e8ec0cbb1eba8a52241bd3edb3be370715f9d1f1"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.195304 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.195358 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data podName:71c96526-7c37-42c2-896e-b551dd6ed5b8 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:58.195344722 +0000 UTC m=+1520.595929366 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data") pod "rabbitmq-server-0" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8") : configmap "rabbitmq-config-data" not found
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.227941 4746 scope.go:117] "RemoveContainer" containerID="c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.229129 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.272261 4746 scope.go:117] "RemoveContainer" containerID="c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.272814 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d\": container with ID starting with c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d not found: ID does not exist" containerID="c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.272846 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d"} err="failed to get container status \"c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d\": rpc error: code = NotFound desc = could not find container \"c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d\": container with ID starting with c5702455d1d3b2877b59692fb27ac3eef1d4ca8bdff0d929c20aa6fa937daa5d not found: ID does not exist"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.272871 4746 scope.go:117] "RemoveContainer" containerID="8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.291942 4746 scope.go:117] "RemoveContainer" containerID="64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.392727 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.393157 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-central-agent" containerID="cri-o://b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82" gracePeriod=30
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.393400 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="proxy-httpd" containerID="cri-o://66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124" gracePeriod=30
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.393437 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-notification-agent" containerID="cri-o://304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de" gracePeriod=30
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.393492 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="sg-core" containerID="cri-o://4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb" gracePeriod=30
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.398690 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data-custom\") pod \"c530fa14-8291-45d6-800c-54fd9716fa1d\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.398730 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-combined-ca-bundle\") pod \"c530fa14-8291-45d6-800c-54fd9716fa1d\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.398832 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvtsv\" (UniqueName: \"kubernetes.io/projected/c530fa14-8291-45d6-800c-54fd9716fa1d-kube-api-access-nvtsv\") pod \"c530fa14-8291-45d6-800c-54fd9716fa1d\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.398852 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c530fa14-8291-45d6-800c-54fd9716fa1d-logs\") pod \"c530fa14-8291-45d6-800c-54fd9716fa1d\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.398875 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data\") pod \"c530fa14-8291-45d6-800c-54fd9716fa1d\" (UID: \"c530fa14-8291-45d6-800c-54fd9716fa1d\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.404606 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c530fa14-8291-45d6-800c-54fd9716fa1d-logs" (OuterVolumeSpecName: "logs") pod "c530fa14-8291-45d6-800c-54fd9716fa1d" (UID: "c530fa14-8291-45d6-800c-54fd9716fa1d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.406036 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.406242 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6f8f1a81-ca32-4335-be69-a9159ede91fa" containerName="kube-state-metrics" containerID="cri-o://614c1528dcb502faea4895d6443017d7e52a267fbfb970de1158d60f296102fb" gracePeriod=30
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.410234 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c530fa14-8291-45d6-800c-54fd9716fa1d" (UID: "c530fa14-8291-45d6-800c-54fd9716fa1d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.411546 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c530fa14-8291-45d6-800c-54fd9716fa1d-kube-api-access-nvtsv" (OuterVolumeSpecName: "kube-api-access-nvtsv") pod "c530fa14-8291-45d6-800c-54fd9716fa1d" (UID: "c530fa14-8291-45d6-800c-54fd9716fa1d"). InnerVolumeSpecName "kube-api-access-nvtsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.450468 4746 scope.go:117] "RemoveContainer" containerID="8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.452627 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871\": container with ID starting with 8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871 not found: ID does not exist" containerID="8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.452669 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"} err="failed to get container status \"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871\": rpc error: code = NotFound desc = could not find container \"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871\": container with ID starting with 8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871 not found: ID does not exist"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.452696 4746 scope.go:117] "RemoveContainer" containerID="64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.463623 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="6f8f1a81-ca32-4335-be69-a9159ede91fa" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.201:8081/readyz\": EOF"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.467851 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09\": container with ID starting with 64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09 not found: ID does not exist" containerID="64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.467901 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"} err="failed to get container status \"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09\": rpc error: code = NotFound desc = could not find container \"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09\": container with ID starting with 64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09 not found: ID does not exist"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.467935 4746 scope.go:117] "RemoveContainer" containerID="8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.479577 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871"} err="failed to get container status \"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871\": rpc error: code = NotFound desc = could not find container \"8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871\": container with ID starting with 8d72a1dea4f868c2408837ecc20c1905345cac5501aa2436c7f6a045d7c24871 not found: ID does not exist"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.479679 4746 scope.go:117] "RemoveContainer" containerID="64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.480145 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0185f119-b92f-4f05-9d0d-f0b2e081c331" path="/var/lib/kubelet/pods/0185f119-b92f-4f05-9d0d-f0b2e081c331/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.481020 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09"} err="failed to get container status \"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09\": rpc error: code = NotFound desc = could not find container \"64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09\": container with ID starting with 64c1fe8f6987593e5f62a3ce46e0685c315db52224835949501e4dae35fbdf09 not found: ID does not exist"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.481043 4746 scope.go:117] "RemoveContainer" containerID="3c2e531058476f465ac3dbbb01033f0d27b609383659cd5f42cf8efcfad81000"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.493985 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" path="/var/lib/kubelet/pods/56be91c6-da82-45b5-9b98-d5b6f05f244e/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.494952 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" path="/var/lib/kubelet/pods/5c0d701f-0ba6-4836-b3f9-1425b411d80d/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.495526 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" path="/var/lib/kubelet/pods/8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.500468 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" path="/var/lib/kubelet/pods/9d0831ca-9258-426a-b0d5-9ae88e24daa2/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.500735 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvtsv\" (UniqueName: \"kubernetes.io/projected/c530fa14-8291-45d6-800c-54fd9716fa1d-kube-api-access-nvtsv\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.500824 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c530fa14-8291-45d6-800c-54fd9716fa1d-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.500899 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.502337 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" path="/var/lib/kubelet/pods/b4555f5c-9440-4402-96f9-e2bf40c5cfb1/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.503036 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf970538-d73e-48d7-b242-081dd3eacf7d" path="/var/lib/kubelet/pods/bf970538-d73e-48d7-b242-081dd3eacf7d/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.511578 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1205318-995d-4d3f-8c94-4faab5e1e48a" path="/var/lib/kubelet/pods/d1205318-995d-4d3f-8c94-4faab5e1e48a/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.517774 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" path="/var/lib/kubelet/pods/fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac/volumes"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.518521 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c530fa14-8291-45d6-800c-54fd9716fa1d" (UID: "c530fa14-8291-45d6-800c-54fd9716fa1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.604108 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data" (OuterVolumeSpecName: "config-data") pod "c530fa14-8291-45d6-800c-54fd9716fa1d" (UID: "c530fa14-8291-45d6-800c-54fd9716fa1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.604465 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.714077 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c530fa14-8291-45d6-800c-54fd9716fa1d-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.721809 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d95d-account-create-update-7v4ph"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.721844 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.721859 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d95d-account-create-update-7v4ph"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.721873 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d95d-account-create-update-vh8v8"]
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722205 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerName="dnsmasq-dns"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722216 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerName="dnsmasq-dns"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722232 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722239 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722247 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-server"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722253 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-server"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722265 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722270 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722277 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerName="init"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722283 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerName="init"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722299 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722306 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722316 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener-log"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722323 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener-log"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722331 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="ovsdbserver-nb"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722336 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="ovsdbserver-nb"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722346 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" containerName="ovn-controller"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722352 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" containerName="ovn-controller"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722366 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-httpd"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722372 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-httpd"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722382 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722388 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722400 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="ovsdbserver-sb"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722407 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="ovsdbserver-sb"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.722416 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722422 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722575 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c37ca07-37b6-4fa9-9f8f-eca8bd8eb3df" containerName="nova-cell1-novncproxy-novncproxy"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722590 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722600 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="ovsdbserver-nb"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722615 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722623 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4555f5c-9440-4402-96f9-e2bf40c5cfb1" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722637 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1205318-995d-4d3f-8c94-4faab5e1e48a" containerName="dnsmasq-dns"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722647 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-httpd"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722654 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0d701f-0ba6-4836-b3f9-1425b411d80d" containerName="proxy-server"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722661 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerName="barbican-keystone-listener-log"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722669 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="56be91c6-da82-45b5-9b98-d5b6f05f244e" containerName="ovsdbserver-sb"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722676 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb4fb05e-ea5c-4327-a76b-1d8c1e3ba9ac" containerName="openstack-network-exporter"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.722684 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0831ca-9258-426a-b0d5-9ae88e24daa2" containerName="ovn-controller"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.723155 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d95d-account-create-update-vh8v8"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.723173 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-tql25"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.723183 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mmllj"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.723207 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mmllj"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.723221 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-tql25"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.723288 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d95d-account-create-update-vh8v8"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.723379 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" containerName="memcached" containerID="cri-o://97ce90f5b14d69f5966c8a456653fa79fce41aed308c4ece923536d92ee0a358" gracePeriod=30
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.729871 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.736449 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6f4c9c876f-dbjbj"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.740696 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6f4c9c876f-dbjbj" podUID="2d2a3529-662b-4eb6-aebd-c15e694cab4e" containerName="keystone-api" containerID="cri-o://7d91e45479b9bc92a37b60229bed29f47cbec6a2f001ef73702b8bf9cbd0a8be" gracePeriod=30
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.750805 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.780340 4746 scope.go:117] "RemoveContainer" containerID="853e911fd31bf2ffa128720b2f51df9779a9f20caf3e03f65b396b4a560ba830"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.782285 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.782956 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7d9fr"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.814853 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-config-data\") pod \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.815075 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-combined-ca-bundle\") pod \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.815103 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg44n\" (UniqueName: \"kubernetes.io/projected/2cbc4caa-43b8-42c2-83ae-e2448dda745f-kube-api-access-jg44n\") pod \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\" (UID: \"2cbc4caa-43b8-42c2-83ae-e2448dda745f\") "
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.815355 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.815437 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhhg2\" (UniqueName: \"kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.821587 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cbc4caa-43b8-42c2-83ae-e2448dda745f-kube-api-access-jg44n" (OuterVolumeSpecName: "kube-api-access-jg44n") pod "2cbc4caa-43b8-42c2-83ae-e2448dda745f" (UID: "2cbc4caa-43b8-42c2-83ae-e2448dda745f"). InnerVolumeSpecName "kube-api-access-jg44n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.836829 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lh2nd"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.842796 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-config-data" (OuterVolumeSpecName: "config-data") pod "2cbc4caa-43b8-42c2-83ae-e2448dda745f" (UID: "2cbc4caa-43b8-42c2-83ae-e2448dda745f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.850756 4746 generic.go:334] "Generic (PLEG): container finished" podID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerID="4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb" exitCode=2
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.850997 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerDied","Data":"4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb"}
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.851641 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lh2nd"]
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.863512 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cbc4caa-43b8-42c2-83ae-e2448dda745f" (UID: "2cbc4caa-43b8-42c2-83ae-e2448dda745f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.863843 4746 generic.go:334] "Generic (PLEG): container finished" podID="6f8f1a81-ca32-4335-be69-a9159ede91fa" containerID="614c1528dcb502faea4895d6443017d7e52a267fbfb970de1158d60f296102fb" exitCode=2
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.863865 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f8f1a81-ca32-4335-be69-a9159ede91fa","Type":"ContainerDied","Data":"614c1528dcb502faea4895d6443017d7e52a267fbfb970de1158d60f296102fb"}
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.884698 4746 generic.go:334] "Generic (PLEG): container finished" podID="2cbc4caa-43b8-42c2-83ae-e2448dda745f" containerID="35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c" exitCode=0
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.884779 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2cbc4caa-43b8-42c2-83ae-e2448dda745f","Type":"ContainerDied","Data":"35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c"}
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.884803 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2cbc4caa-43b8-42c2-83ae-e2448dda745f","Type":"ContainerDied","Data":"79a3841462637483f641720d70a1a5a9eaca2ce3661ebb0be92c29db0c754c5d"}
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.884814 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d95d-account-create-update-vh8v8"]
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.885616 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-nhhg2 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-d95d-account-create-update-vh8v8" podUID="1de36f4e-a20f-421c-a50f-8dc90f3c01a5"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.885690 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.899369 4746 generic.go:334] "Generic (PLEG): container finished" podID="c530fa14-8291-45d6-800c-54fd9716fa1d" containerID="2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b" exitCode=0
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.899498 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.901723 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" event={"ID":"c530fa14-8291-45d6-800c-54fd9716fa1d","Type":"ContainerDied","Data":"2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b"}
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.901766 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64978c9b7d-d9wgb" event={"ID":"c530fa14-8291-45d6-800c-54fd9716fa1d","Type":"ContainerDied","Data":"89730ddfe40b875f34a08fd2a9731c17ac94456d01411922fd29e512fde52519"}
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.907368 4746 generic.go:334] "Generic (PLEG): container finished" podID="a5213774-9475-450b-a26f-2212d807c39f" containerID="cc32a899c7d388e4214b155041db0c6e217f8e04fb43bb99bba408d5f272016d" exitCode=1
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.907403 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7d9fr" event={"ID":"a5213774-9475-450b-a26f-2212d807c39f","Type":"ContainerDied","Data":"cc32a899c7d388e4214b155041db0c6e217f8e04fb43bb99bba408d5f272016d"}
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.907918 4746 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-7d9fr" secret="" err="secret \"galera-openstack-dockercfg-29p5x\" not found"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.907969 4746 scope.go:117] "RemoveContainer" containerID="cc32a899c7d388e4214b155041db0c6e217f8e04fb43bb99bba408d5f272016d"
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.908414 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-7d9fr_openstack(a5213774-9475-450b-a26f-2212d807c39f)\"" pod="openstack/root-account-create-update-7d9fr" podUID="a5213774-9475-450b-a26f-2212d807c39f"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.923614 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhg2\" (UniqueName: \"kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.924025 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8"
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.924236 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.924255 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg44n\" (UniqueName: \"kubernetes.io/projected/2cbc4caa-43b8-42c2-83ae-e2448dda745f-kube-api-access-jg44n\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: I0129 16:59:54.924266 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc4caa-43b8-42c2-83ae-e2448dda745f-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.924340 4746 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.924398 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts podName:1de36f4e-a20f-421c-a50f-8dc90f3c01a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:55.424382578 +0000 UTC m=+1517.824967222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts") pod "keystone-d95d-account-create-update-vh8v8" (UID: "1de36f4e-a20f-421c-a50f-8dc90f3c01a5") : configmap "openstack-scripts" not found
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.949702 4746 projected.go:194] Error preparing data for projected volume kube-api-access-nhhg2 for pod openstack/keystone-d95d-account-create-update-vh8v8: failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 29 16:59:54 crc kubenswrapper[4746]: E0129 16:59:54.949797 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2 podName:1de36f4e-a20f-421c-a50f-8dc90f3c01a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:55.449772193 +0000 UTC m=+1517.850356847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nhhg2" (UniqueName: "kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2") pod "keystone-d95d-account-create-update-vh8v8" (UID: "1de36f4e-a20f-421c-a50f-8dc90f3c01a5") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.027683 4746 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.028440 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts podName:a5213774-9475-450b-a26f-2212d807c39f nodeName:}" failed. No retries permitted until 2026-01-29 16:59:55.528417908 +0000 UTC m=+1517.929002552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts") pod "root-account-create-update-7d9fr" (UID: "a5213774-9475-450b-a26f-2212d807c39f") : configmap "openstack-scripts" not found
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.089097 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="galera" containerID="cri-o://0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c" gracePeriod=30
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.118527 4746 scope.go:117] "RemoveContainer" containerID="e4ad991870d64b906f98b966d5a79dd93b5367dac6510d8b9f9b6e56123d442e"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.169029 4746 scope.go:117] "RemoveContainer" containerID="35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.170814 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.176264 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.183878 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.190933 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-64978c9b7d-d9wgb"]
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.209824 4746 scope.go:117] "RemoveContainer" containerID="35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c"
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.210895 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c\": container with ID starting with 35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c not found: ID does not exist" containerID="35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.210927 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c"} err="failed to get container status \"35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c\": rpc error: code = NotFound desc = could not find container \"35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c\": container with ID starting with 35ccbe7c44a7d00a1e226b8168e52b9ae8d85052dcef3d8303ed817213e9093c not found: ID does not exist"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.210947 4746 scope.go:117] "RemoveContainer" containerID="2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.213651 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-64978c9b7d-d9wgb"]
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.229612 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x82d\" (UniqueName: \"kubernetes.io/projected/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-api-access-7x82d\") pod \"6f8f1a81-ca32-4335-be69-a9159ede91fa\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.229717 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-combined-ca-bundle\") pod \"6f8f1a81-ca32-4335-be69-a9159ede91fa\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.229774 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-certs\") pod \"6f8f1a81-ca32-4335-be69-a9159ede91fa\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.229903 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-config\") pod \"6f8f1a81-ca32-4335-be69-a9159ede91fa\" (UID: \"6f8f1a81-ca32-4335-be69-a9159ede91fa\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.246399 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-api-access-7x82d" (OuterVolumeSpecName: "kube-api-access-7x82d") pod "6f8f1a81-ca32-4335-be69-a9159ede91fa" (UID: "6f8f1a81-ca32-4335-be69-a9159ede91fa"). InnerVolumeSpecName "kube-api-access-7x82d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.260134 4746 scope.go:117] "RemoveContainer" containerID="c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.262440 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f8f1a81-ca32-4335-be69-a9159ede91fa" (UID: "6f8f1a81-ca32-4335-be69-a9159ede91fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.284117 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6f8f1a81-ca32-4335-be69-a9159ede91fa" (UID: "6f8f1a81-ca32-4335-be69-a9159ede91fa"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.304668 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6f8f1a81-ca32-4335-be69-a9159ede91fa" (UID: "6f8f1a81-ca32-4335-be69-a9159ede91fa"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.310420 4746 scope.go:117] "RemoveContainer" containerID="2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b"
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.310802 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b\": container with ID starting with 2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b not found: ID does not exist" containerID="2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.310840 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b"} err="failed to get container status \"2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b\": rpc error: code = NotFound desc = could not find container \"2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b\": container with ID starting with 2ec04b5a058cfbe3709e8d03b4d9ffd5b6a332aa9de3b408f4c3b990aae91f8b not found: ID does not exist"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.310859 4746 scope.go:117] "RemoveContainer" containerID="c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0"
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.311322 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0\": container with ID starting with c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0 not found: ID does not exist" containerID="c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.311354 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0"} err="failed to get container status \"c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0\": rpc error: code = NotFound desc = could not find container \"c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0\": container with ID starting with c6d6afbd807e589d59bdcd0ddd441b901bb4f4d42cba5c516dd5d446addcc0b0 not found: ID does not exist"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.311376 4746 scope.go:117] "RemoveContainer" containerID="4dfe489e2b680e7d9a9284cb8542834e8149f37a867c50860143e6bb9632324b"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.328412 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b7cbf56d-9h4gg"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.332537 4746 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.332572 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x82d\" (UniqueName: \"kubernetes.io/projected/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-api-access-7x82d\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.332587 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.332598 4746 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f8f1a81-ca32-4335-be69-a9159ede91fa-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.433312 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-scripts\") pod \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.433389 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mx48\" (UniqueName: \"kubernetes.io/projected/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-kube-api-access-7mx48\") pod \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.433481 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-combined-ca-bundle\") pod \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.433592 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-config-data\") pod \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.433685 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-public-tls-certs\") pod \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.433741 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-internal-tls-certs\") pod \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.433778 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-logs\") pod \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\" (UID: \"9db12a59-b8e4-43e4-add4-9cb361cfe6c5\") "
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.434621 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8"
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.434765 4746 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.434827 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts podName:1de36f4e-a20f-421c-a50f-8dc90f3c01a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:56.434808784 +0000 UTC m=+1518.835393438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts") pod "keystone-d95d-account-create-update-vh8v8" (UID: "1de36f4e-a20f-421c-a50f-8dc90f3c01a5") : configmap "openstack-scripts" not found
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.438574 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-scripts" (OuterVolumeSpecName: "scripts") pod "9db12a59-b8e4-43e4-add4-9cb361cfe6c5" (UID: "9db12a59-b8e4-43e4-add4-9cb361cfe6c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.439210 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-logs" (OuterVolumeSpecName: "logs") pod "9db12a59-b8e4-43e4-add4-9cb361cfe6c5" (UID: "9db12a59-b8e4-43e4-add4-9cb361cfe6c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.440357 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-kube-api-access-7mx48" (OuterVolumeSpecName: "kube-api-access-7mx48") pod "9db12a59-b8e4-43e4-add4-9cb361cfe6c5" (UID: "9db12a59-b8e4-43e4-add4-9cb361cfe6c5"). InnerVolumeSpecName "kube-api-access-7mx48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.492168 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-config-data" (OuterVolumeSpecName: "config-data") pod "9db12a59-b8e4-43e4-add4-9cb361cfe6c5" (UID: "9db12a59-b8e4-43e4-add4-9cb361cfe6c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.536082 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhg2\" (UniqueName: \"kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8"
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.536345 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.536360 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.536367 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.536376 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mx48\" (UniqueName: \"kubernetes.io/projected/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-kube-api-access-7mx48\") on node \"crc\" DevicePath \"\""
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.541023 4746 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.541100 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts podName:a5213774-9475-450b-a26f-2212d807c39f nodeName:}" failed. No retries permitted until 2026-01-29 16:59:56.541081126 +0000 UTC m=+1518.941665770 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts") pod "root-account-create-update-7d9fr" (UID: "a5213774-9475-450b-a26f-2212d807c39f") : configmap "openstack-scripts" not found Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.548037 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-654869dd86-s9th4" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.548045 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-654869dd86-s9th4" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.548375 4746 projected.go:194] Error preparing data for projected volume kube-api-access-nhhg2 for pod openstack/keystone-d95d-account-create-update-vh8v8: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:59:55 crc kubenswrapper[4746]: E0129 16:59:55.548540 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2 podName:1de36f4e-a20f-421c-a50f-8dc90f3c01a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:56.54852378 +0000 UTC m=+1518.949108484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nhhg2" (UniqueName: "kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2") pod "keystone-d95d-account-create-update-vh8v8" (UID: "1de36f4e-a20f-421c-a50f-8dc90f3c01a5") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.572144 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9db12a59-b8e4-43e4-add4-9cb361cfe6c5" (UID: "9db12a59-b8e4-43e4-add4-9cb361cfe6c5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.581872 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9db12a59-b8e4-43e4-add4-9cb361cfe6c5" (UID: "9db12a59-b8e4-43e4-add4-9cb361cfe6c5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.595039 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9db12a59-b8e4-43e4-add4-9cb361cfe6c5" (UID: "9db12a59-b8e4-43e4-add4-9cb361cfe6c5"). InnerVolumeSpecName "combined-ca-bundle". 
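[Editor's note] The two recurring mount failures above both point at objects that do not yet exist in the openstack namespace: the "operator-scripts" volume references configmap "openstack-scripts", and the projected token volume references serviceaccount "galera-openstack". A minimal sketch of objects with those names follows; the names and namespace are taken from the log, while the data key and script content are illustrative assumptions only (the real content is owned by the OpenStack operator that creates these jobs).

```yaml
# Sketch only: metadata.name/namespace come from the log above;
# the data key and script body are hypothetical placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: openstack-scripts          # referenced by the failing "operator-scripts" volume
  namespace: openstack
data:
  account-create.sh: |             # hypothetical key; real keys are operator-defined
    #!/bin/sh
    echo "account create/update logic would live here"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: galera-openstack           # referenced by the failing kube-api-access-nhhg2 token fetch
  namespace: openstack
```

Once objects with these names exist, the nestedpendingoperations backoff seen above (retry after durationBeforeRetry 1s) would let the kubelet mount the volumes on a subsequent attempt.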
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.653690 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.653728 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.653741 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db12a59-b8e4-43e4-add4-9cb361cfe6c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.667174 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.733765 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.172:8776/healthcheck\": dial tcp 10.217.0.172:8776: connect: connection refused" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.754899 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-config-data\") pod \"abd6dc02-1269-43b8-a1aa-d239875e4902\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.755153 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-nova-metadata-tls-certs\") pod \"abd6dc02-1269-43b8-a1aa-d239875e4902\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.755205 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw9gw\" (UniqueName: \"kubernetes.io/projected/abd6dc02-1269-43b8-a1aa-d239875e4902-kube-api-access-gw9gw\") pod \"abd6dc02-1269-43b8-a1aa-d239875e4902\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.755260 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-combined-ca-bundle\") pod \"abd6dc02-1269-43b8-a1aa-d239875e4902\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.755381 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd6dc02-1269-43b8-a1aa-d239875e4902-logs\") pod \"abd6dc02-1269-43b8-a1aa-d239875e4902\" (UID: \"abd6dc02-1269-43b8-a1aa-d239875e4902\") " Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.755996 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abd6dc02-1269-43b8-a1aa-d239875e4902-logs" (OuterVolumeSpecName: "logs") pod "abd6dc02-1269-43b8-a1aa-d239875e4902" (UID: "abd6dc02-1269-43b8-a1aa-d239875e4902"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.756288 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abd6dc02-1269-43b8-a1aa-d239875e4902-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.760685 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd6dc02-1269-43b8-a1aa-d239875e4902-kube-api-access-gw9gw" (OuterVolumeSpecName: "kube-api-access-gw9gw") pod "abd6dc02-1269-43b8-a1aa-d239875e4902" (UID: "abd6dc02-1269-43b8-a1aa-d239875e4902"). InnerVolumeSpecName "kube-api-access-gw9gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.794102 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-config-data" (OuterVolumeSpecName: "config-data") pod "abd6dc02-1269-43b8-a1aa-d239875e4902" (UID: "abd6dc02-1269-43b8-a1aa-d239875e4902"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.807046 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abd6dc02-1269-43b8-a1aa-d239875e4902" (UID: "abd6dc02-1269-43b8-a1aa-d239875e4902"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.824564 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "abd6dc02-1269-43b8-a1aa-d239875e4902" (UID: "abd6dc02-1269-43b8-a1aa-d239875e4902"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.858774 4746 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.858814 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw9gw\" (UniqueName: \"kubernetes.io/projected/abd6dc02-1269-43b8-a1aa-d239875e4902-kube-api-access-gw9gw\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.858827 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.858839 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abd6dc02-1269-43b8-a1aa-d239875e4902-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.927607 4746 generic.go:334] "Generic (PLEG): container finished" podID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerID="90d0f7c0ec8bee68f1032e1115bb3957e1cc29de95dedf8075f362d0b3ca5802" exitCode=0 Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.927789 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cfbdab6-8b6e-4199-808c-be07e373df64","Type":"ContainerDied","Data":"90d0f7c0ec8bee68f1032e1115bb3957e1cc29de95dedf8075f362d0b3ca5802"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.928084 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4cfbdab6-8b6e-4199-808c-be07e373df64","Type":"ContainerDied","Data":"b386eedd126608892d8c0b9d82de86afde17c10844e725e83a51e28b70fabec9"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.928101 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b386eedd126608892d8c0b9d82de86afde17c10844e725e83a51e28b70fabec9" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.935814 4746 generic.go:334] "Generic (PLEG): container finished" podID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerID="08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718" exitCode=0 Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.935881 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7cbf56d-9h4gg" event={"ID":"9db12a59-b8e4-43e4-add4-9cb361cfe6c5","Type":"ContainerDied","Data":"08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.935942 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7cbf56d-9h4gg" event={"ID":"9db12a59-b8e4-43e4-add4-9cb361cfe6c5","Type":"ContainerDied","Data":"d910aff6b85b1c406735298c4da2d7c5f8e6eb9d7aff652e523488a0dfe5e25b"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.935962 4746 scope.go:117] "RemoveContainer" containerID="08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.935906 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9b7cbf56d-9h4gg" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.948717 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.960786 4746 generic.go:334] "Generic (PLEG): container finished" podID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerID="66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124" exitCode=0 Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.960824 4746 generic.go:334] "Generic (PLEG): container finished" podID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerID="b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82" exitCode=0 Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.960848 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerDied","Data":"66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.960896 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerDied","Data":"b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.963142 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.973230 4746 generic.go:334] "Generic (PLEG): container finished" podID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerID="c78b0cc4c733ab33d81ae04bcb4447f430f04b3f564487ae982eadf1d345566d" exitCode=0 Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.973293 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654869dd86-s9th4" event={"ID":"33cf45d3-8c95-4453-9a1e-46ad14bce822","Type":"ContainerDied","Data":"c78b0cc4c733ab33d81ae04bcb4447f430f04b3f564487ae982eadf1d345566d"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.973321 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-654869dd86-s9th4" event={"ID":"33cf45d3-8c95-4453-9a1e-46ad14bce822","Type":"ContainerDied","Data":"4aa80619b2a8a1702d85e7b84d3ace10e4a40f4cae664150889194b0b26b5325"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.973331 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa80619b2a8a1702d85e7b84d3ace10e4a40f4cae664150889194b0b26b5325" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.973490 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.975487 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.978625 4746 generic.go:334] "Generic (PLEG): container finished" podID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerID="34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5" exitCode=0 Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.978667 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abd6dc02-1269-43b8-a1aa-d239875e4902","Type":"ContainerDied","Data":"34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.978685 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"abd6dc02-1269-43b8-a1aa-d239875e4902","Type":"ContainerDied","Data":"0d727a66ef596a29971b4cbef16439a48124e1c54cea78b233d81747f12d9aa1"} Jan 29 16:59:55 crc kubenswrapper[4746]: I0129 16:59:55.978726 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.004566 4746 generic.go:334] "Generic (PLEG): container finished" podID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerID="cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228" exitCode=0 Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.004650 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6","Type":"ContainerDied","Data":"cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228"} Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.004697 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6","Type":"ContainerDied","Data":"d32d646933004e53aeb0b513f795313956ca77d247f42afe7d3d85e193ae1d3f"} Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.004776 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.006753 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.009675 4746 generic.go:334] "Generic (PLEG): container finished" podID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerID="1a56f205c5cc1a3a3d1140e62eef702ab9a791c26bc2dc47a9b8cf3218933c17" exitCode=0 Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.009726 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf727c52-99b6-4ab8-9815-4ab8c2dd5050","Type":"ContainerDied","Data":"1a56f205c5cc1a3a3d1140e62eef702ab9a791c26bc2dc47a9b8cf3218933c17"} Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.014035 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9b7cbf56d-9h4gg"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.018477 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.019156 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f8f1a81-ca32-4335-be69-a9159ede91fa","Type":"ContainerDied","Data":"62ca78d227bfa84aab7b5a6c9a7c08f59dbba90f42b773f8bc3e2e620e00c7c6"} Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.021137 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9b7cbf56d-9h4gg"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.022542 4746 generic.go:334] "Generic (PLEG): container finished" podID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerID="f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d" exitCode=0 Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.022623 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d95d-account-create-update-vh8v8" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.023052 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.023217 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f93a42f7-a972-44c2-a2a4-5f698ba4caf7","Type":"ContainerDied","Data":"f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d"} Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.023241 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f93a42f7-a972-44c2-a2a4-5f698ba4caf7","Type":"ContainerDied","Data":"4bec86215d0f7d79211bca00ac121bb4fa38755b694cd86890ca813ade593f54"} Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.039147 4746 scope.go:117] "RemoveContainer" containerID="fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.040527 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d95d-account-create-update-vh8v8" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.062943 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-internal-tls-certs\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068250 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-httpd-run\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-scripts\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068446 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-internal-tls-certs\") pod \"33cf45d3-8c95-4453-9a1e-46ad14bce822\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068541 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-logs\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068612 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzhbr\" (UniqueName: \"kubernetes.io/projected/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-kube-api-access-lzhbr\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068698 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33cf45d3-8c95-4453-9a1e-46ad14bce822-logs\") pod \"33cf45d3-8c95-4453-9a1e-46ad14bce822\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068760 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-config-data\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-combined-ca-bundle\") pod \"33cf45d3-8c95-4453-9a1e-46ad14bce822\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068903 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbdab6-8b6e-4199-808c-be07e373df64-logs\") pod \"4cfbdab6-8b6e-4199-808c-be07e373df64\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") 
" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.068985 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-combined-ca-bundle\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.069066 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-config-data\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.069146 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-combined-ca-bundle\") pod \"4cfbdab6-8b6e-4199-808c-be07e373df64\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.069282 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9w8g\" (UniqueName: \"kubernetes.io/projected/33cf45d3-8c95-4453-9a1e-46ad14bce822-kube-api-access-x9w8g\") pod \"33cf45d3-8c95-4453-9a1e-46ad14bce822\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.070471 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxsfk\" (UniqueName: \"kubernetes.io/projected/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-kube-api-access-vxsfk\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.071337 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-httpd-run\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.071866 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-internal-tls-certs\") pod \"4cfbdab6-8b6e-4199-808c-be07e373df64\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073300 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data-custom\") pod \"33cf45d3-8c95-4453-9a1e-46ad14bce822\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073388 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\" (UID: \"f93a42f7-a972-44c2-a2a4-5f698ba4caf7\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073463 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqrb8\" (UniqueName: \"kubernetes.io/projected/4cfbdab6-8b6e-4199-808c-be07e373df64-kube-api-access-zqrb8\") pod \"4cfbdab6-8b6e-4199-808c-be07e373df64\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " Jan 29 
16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073549 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-public-tls-certs\") pod \"33cf45d3-8c95-4453-9a1e-46ad14bce822\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073617 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-public-tls-certs\") pod \"4cfbdab6-8b6e-4199-808c-be07e373df64\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073684 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-combined-ca-bundle\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073743 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data\") pod \"33cf45d3-8c95-4453-9a1e-46ad14bce822\" (UID: \"33cf45d3-8c95-4453-9a1e-46ad14bce822\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073887 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-logs\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.073967 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-config-data\") pod \"4cfbdab6-8b6e-4199-808c-be07e373df64\" (UID: \"4cfbdab6-8b6e-4199-808c-be07e373df64\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.074032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-scripts\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.074096 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-public-tls-certs\") pod \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\" (UID: \"4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.069988 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cf45d3-8c95-4453-9a1e-46ad14bce822-logs" (OuterVolumeSpecName: "logs") pod "33cf45d3-8c95-4453-9a1e-46ad14bce822" (UID: "33cf45d3-8c95-4453-9a1e-46ad14bce822"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.069983 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cfbdab6-8b6e-4199-808c-be07e373df64-logs" (OuterVolumeSpecName: "logs") pod "4cfbdab6-8b6e-4199-808c-be07e373df64" (UID: "4cfbdab6-8b6e-4199-808c-be07e373df64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.070203 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-logs" (OuterVolumeSpecName: "logs") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.071623 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.087939 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-logs" (OuterVolumeSpecName: "logs") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.088062 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-scripts" (OuterVolumeSpecName: "scripts") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.095453 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.095975 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cfbdab6-8b6e-4199-808c-be07e373df64-kube-api-access-zqrb8" (OuterVolumeSpecName: "kube-api-access-zqrb8") pod "4cfbdab6-8b6e-4199-808c-be07e373df64" (UID: "4cfbdab6-8b6e-4199-808c-be07e373df64"). InnerVolumeSpecName "kube-api-access-zqrb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.110370 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.111332 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.111356 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-scripts" (OuterVolumeSpecName: "scripts") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.111591 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-kube-api-access-vxsfk" (OuterVolumeSpecName: "kube-api-access-vxsfk") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "kube-api-access-vxsfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.111772 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.111893 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-kube-api-access-lzhbr" (OuterVolumeSpecName: "kube-api-access-lzhbr") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "kube-api-access-lzhbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.120153 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cf45d3-8c95-4453-9a1e-46ad14bce822-kube-api-access-x9w8g" (OuterVolumeSpecName: "kube-api-access-x9w8g") pod "33cf45d3-8c95-4453-9a1e-46ad14bce822" (UID: "33cf45d3-8c95-4453-9a1e-46ad14bce822"). InnerVolumeSpecName "kube-api-access-x9w8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.126268 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.131695 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33cf45d3-8c95-4453-9a1e-46ad14bce822" (UID: "33cf45d3-8c95-4453-9a1e-46ad14bce822"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.135482 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.148870 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.156300 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33cf45d3-8c95-4453-9a1e-46ad14bce822" (UID: "33cf45d3-8c95-4453-9a1e-46ad14bce822"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.159423 4746 scope.go:117] "RemoveContainer" containerID="08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.159864 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718\": container with ID starting with 08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718 not found: ID does not exist" containerID="08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.159903 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718"} err="failed to get container status \"08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718\": rpc error: code = NotFound desc = could not find container \"08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718\": container with ID starting with 08284df11dc176a3325ad8093efe32d03d9769ac5e1e97899901dd3884e37718 not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.159922 4746 scope.go:117] "RemoveContainer" containerID="fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.160113 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7\": container with ID starting with fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7 not found: ID does not exist" containerID="fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.160131 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7"} err="failed to get container status \"fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7\": rpc error: code = NotFound desc = could not find container \"fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7\": container with ID starting with fd8e03ad7a3e292877b6e368ce8d23ccc2a346793d4eb7edcd7f186f691c62e7 not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.160143 4746 scope.go:117] "RemoveContainer" containerID="34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.163469 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.169297 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "33cf45d3-8c95-4453-9a1e-46ad14bce822" (UID: "33cf45d3-8c95-4453-9a1e-46ad14bce822"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv94l\" (UniqueName: \"kubernetes.io/projected/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-kube-api-access-tv94l\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176473 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176507 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-public-tls-certs\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176526 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data-custom\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176613 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-logs\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176680 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-scripts\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176701 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-etc-machine-id\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176727 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-internal-tls-certs\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.176749 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-combined-ca-bundle\") pod \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\" (UID: \"cf727c52-99b6-4ab8-9815-4ab8c2dd5050\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177220 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33cf45d3-8c95-4453-9a1e-46ad14bce822-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: 
I0129 16:59:56.177236 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177246 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4cfbdab6-8b6e-4199-808c-be07e373df64-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177255 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177270 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9w8g\" (UniqueName: \"kubernetes.io/projected/33cf45d3-8c95-4453-9a1e-46ad14bce822-kube-api-access-x9w8g\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177289 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxsfk\" (UniqueName: \"kubernetes.io/projected/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-kube-api-access-vxsfk\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177300 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177309 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177332 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177341 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqrb8\" (UniqueName: \"kubernetes.io/projected/4cfbdab6-8b6e-4199-808c-be07e373df64-kube-api-access-zqrb8\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177349 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177362 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177371 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177378 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177386 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177393 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177403 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.177414 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzhbr\" (UniqueName: \"kubernetes.io/projected/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-kube-api-access-lzhbr\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.178758 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-logs" (OuterVolumeSpecName: "logs") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.182294 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.188711 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-scripts" (OuterVolumeSpecName: "scripts") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.197279 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-config-data" (OuterVolumeSpecName: "config-data") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.198715 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.200001 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "33cf45d3-8c95-4453-9a1e-46ad14bce822" (UID: "33cf45d3-8c95-4453-9a1e-46ad14bce822"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.203688 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-kube-api-access-tv94l" (OuterVolumeSpecName: "kube-api-access-tv94l") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "kube-api-access-tv94l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.209572 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-config-data" (OuterVolumeSpecName: "config-data") pod "4cfbdab6-8b6e-4199-808c-be07e373df64" (UID: "4cfbdab6-8b6e-4199-808c-be07e373df64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.219146 4746 scope.go:117] "RemoveContainer" containerID="9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.221267 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.230298 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f93a42f7-a972-44c2-a2a4-5f698ba4caf7" (UID: "f93a42f7-a972-44c2-a2a4-5f698ba4caf7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.235700 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-config-data" (OuterVolumeSpecName: "config-data") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.236063 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.244794 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cfbdab6-8b6e-4199-808c-be07e373df64" (UID: "4cfbdab6-8b6e-4199-808c-be07e373df64"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.247274 4746 scope.go:117] "RemoveContainer" containerID="34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.247889 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5\": container with ID starting with 34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5 not found: ID does not exist" containerID="34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.247956 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5"} err="failed to get container status \"34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5\": rpc error: code = NotFound desc = could not find container \"34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5\": container with ID starting with 34fd1422d0e06c10b1e2758cf65f2ab78d83983338deada3fdc4a64a464ed4c5 not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.247995 4746 scope.go:117] "RemoveContainer" containerID="9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.249820 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55\": container with ID starting with 9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55 not found: ID does not exist" containerID="9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.249855 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55"} err="failed to get container status \"9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55\": rpc error: code = NotFound desc = could not find container \"9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55\": container with ID starting with 9e1fdedf72fca1ec8c930b9d2c156845c46d58c6fe33a1da776befc63c49dc55 not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.249879 4746 scope.go:117] "RemoveContainer" containerID="cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.250403 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4cfbdab6-8b6e-4199-808c-be07e373df64" (UID: "4cfbdab6-8b6e-4199-808c-be07e373df64"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.263299 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283230 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283256 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283266 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283276 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283290 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283388 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283401 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283412 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv94l\" (UniqueName: \"kubernetes.io/projected/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-kube-api-access-tv94l\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283425 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f93a42f7-a972-44c2-a2a4-5f698ba4caf7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283435 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283443 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283454 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283464 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: 
I0129 16:59:56.283472 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.283481 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.292738 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data" (OuterVolumeSpecName: "config-data") pod "33cf45d3-8c95-4453-9a1e-46ad14bce822" (UID: "33cf45d3-8c95-4453-9a1e-46ad14bce822"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.298951 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.304110 4746 scope.go:117] "RemoveContainer" containerID="d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.304172 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.308865 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4cfbdab6-8b6e-4199-808c-be07e373df64" (UID: "4cfbdab6-8b6e-4199-808c-be07e373df64"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.321718 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.323309 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" (UID: "4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.332482 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data" (OuterVolumeSpecName: "config-data") pod "cf727c52-99b6-4ab8-9815-4ab8c2dd5050" (UID: "cf727c52-99b6-4ab8-9815-4ab8c2dd5050"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.374691 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.389697 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.389719 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.389727 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cfbdab6-8b6e-4199-808c-be07e373df64-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.389739 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.389747 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cf45d3-8c95-4453-9a1e-46ad14bce822-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.389755 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf727c52-99b6-4ab8-9815-4ab8c2dd5050-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.389765 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.393865 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.405935 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.425310 4746 scope.go:117] "RemoveContainer" containerID="cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.426130 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228\": container with ID starting with cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228 not found: ID does not exist" containerID="cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.426175 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228"} err="failed to get container status \"cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228\": rpc error: code = NotFound desc = could not find container \"cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228\": container with ID 
starting with cbafc67a005f324b12d37d67c71bcc25fc069d223169d40607070f5743449228 not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.426223 4746 scope.go:117] "RemoveContainer" containerID="d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.426772 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5\": container with ID starting with d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5 not found: ID does not exist" containerID="d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.426794 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5"} err="failed to get container status \"d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5\": rpc error: code = NotFound desc = could not find container \"d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5\": container with ID starting with d0a97d8e40e8500f3e33365657f47910f0a7a61bdc8f5ae52576443201d1fce5 not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.426808 4746 scope.go:117] "RemoveContainer" containerID="1a56f205c5cc1a3a3d1140e62eef702ab9a791c26bc2dc47a9b8cf3218933c17" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.451650 4746 scope.go:117] "RemoveContainer" containerID="14a457ada9ded8a131b71b82b1d68aaab179b4e4656c9f30a8ff693e5705512c" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.460199 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cbc4caa-43b8-42c2-83ae-e2448dda745f" path="/var/lib/kubelet/pods/2cbc4caa-43b8-42c2-83ae-e2448dda745f/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.460947 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8f1a81-ca32-4335-be69-a9159ede91fa" path="/var/lib/kubelet/pods/6f8f1a81-ca32-4335-be69-a9159ede91fa/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.461736 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" path="/var/lib/kubelet/pods/9db12a59-b8e4-43e4-add4-9cb361cfe6c5/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.462910 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" path="/var/lib/kubelet/pods/abd6dc02-1269-43b8-a1aa-d239875e4902/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.463451 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac094691-abb2-4295-bbe9-13b698b6b315" path="/var/lib/kubelet/pods/ac094691-abb2-4295-bbe9-13b698b6b315/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.463967 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c530fa14-8291-45d6-800c-54fd9716fa1d" path="/var/lib/kubelet/pods/c530fa14-8291-45d6-800c-54fd9716fa1d/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.464979 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4af9fe5-b4be-4952-97ec-60c8d00703e9" path="/var/lib/kubelet/pods/d4af9fe5-b4be-4952-97ec-60c8d00703e9/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.465465 4746 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="f1fdba39-b67b-4ab6-af7d-c254d8f725e7" path="/var/lib/kubelet/pods/f1fdba39-b67b-4ab6-af7d-c254d8f725e7/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.465966 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22df875-c939-436a-9163-28b0d53bf10c" path="/var/lib/kubelet/pods/f22df875-c939-436a-9163-28b0d53bf10c/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.467348 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" path="/var/lib/kubelet/pods/f93a42f7-a972-44c2-a2a4-5f698ba4caf7/volumes" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.491257 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts\") pod \"a5213774-9475-450b-a26f-2212d807c39f\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.491568 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/a5213774-9475-450b-a26f-2212d807c39f-kube-api-access-nf629\") pod \"a5213774-9475-450b-a26f-2212d807c39f\" (UID: \"a5213774-9475-450b-a26f-2212d807c39f\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.491931 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5213774-9475-450b-a26f-2212d807c39f" (UID: "a5213774-9475-450b-a26f-2212d807c39f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.492724 4746 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.492764 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.492784 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts podName:1de36f4e-a20f-421c-a50f-8dc90f3c01a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:58.492764763 +0000 UTC m=+1520.893349497 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts") pod "keystone-d95d-account-create-update-vh8v8" (UID: "1de36f4e-a20f-421c-a50f-8dc90f3c01a5") : configmap "openstack-scripts" not found Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.492925 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5213774-9475-450b-a26f-2212d807c39f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.495464 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5213774-9475-450b-a26f-2212d807c39f-kube-api-access-nf629" (OuterVolumeSpecName: "kube-api-access-nf629") pod "a5213774-9475-450b-a26f-2212d807c39f" (UID: "a5213774-9475-450b-a26f-2212d807c39f"). InnerVolumeSpecName "kube-api-access-nf629". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.562790 4746 scope.go:117] "RemoveContainer" containerID="614c1528dcb502faea4895d6443017d7e52a267fbfb970de1158d60f296102fb" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.593952 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhg2\" (UniqueName: \"kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2\") pod \"keystone-d95d-account-create-update-vh8v8\" (UID: \"1de36f4e-a20f-421c-a50f-8dc90f3c01a5\") " pod="openstack/keystone-d95d-account-create-update-vh8v8" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.594025 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf629\" (UniqueName: \"kubernetes.io/projected/a5213774-9475-450b-a26f-2212d807c39f-kube-api-access-nf629\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.597829 4746 projected.go:194] Error preparing data for projected volume kube-api-access-nhhg2 for pod openstack/keystone-d95d-account-create-update-vh8v8: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.597904 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2 podName:1de36f4e-a20f-421c-a50f-8dc90f3c01a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:59:58.597884063 +0000 UTC m=+1520.998468707 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nhhg2" (UniqueName: "kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2") pod "keystone-d95d-account-create-update-vh8v8" (UID: "1de36f4e-a20f-421c-a50f-8dc90f3c01a5") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.650253 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.654527 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34 is running failed: container process not found" containerID="251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.654793 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34 is running failed: container process not found" containerID="251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.655381 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34 is running failed: container process not found" containerID="251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.655406 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="ovn-northd" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.674382 4746 scope.go:117] "RemoveContainer" containerID="f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.684545 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.693749 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.722240 4746 scope.go:117] "RemoveContainer" containerID="61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.777027 4746 scope.go:117] "RemoveContainer" containerID="f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.777569 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d\": container with ID starting with f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d not found: ID does not exist" containerID="f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.777590 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d"} err="failed to get container status 
\"f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d\": rpc error: code = NotFound desc = could not find container \"f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d\": container with ID starting with f5c02ba7bcd09d61862ff4a1be2fd9ad92119bc5379b00303886cb903c7e677d not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.777610 4746 scope.go:117] "RemoveContainer" containerID="61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31" Jan 29 16:59:56 crc kubenswrapper[4746]: E0129 16:59:56.778032 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31\": container with ID starting with 61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31 not found: ID does not exist" containerID="61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.778057 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31"} err="failed to get container status \"61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31\": rpc error: code = NotFound desc = could not find container \"61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31\": container with ID starting with 61caa4321af6d713867ae7ac3d1c5616bd6fec70a2102379c561b7758975ab31 not found: ID does not exist" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.796782 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-scripts\") pod \"76935545-e8e3-4523-97b0-edce25c6756d\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.796816 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data-custom\") pod \"76935545-e8e3-4523-97b0-edce25c6756d\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.796887 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq8ck\" (UniqueName: \"kubernetes.io/projected/76935545-e8e3-4523-97b0-edce25c6756d-kube-api-access-fq8ck\") pod \"76935545-e8e3-4523-97b0-edce25c6756d\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.796920 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-combined-ca-bundle\") pod \"76935545-e8e3-4523-97b0-edce25c6756d\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.796962 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76935545-e8e3-4523-97b0-edce25c6756d-etc-machine-id\") pod \"76935545-e8e3-4523-97b0-edce25c6756d\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.796987 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data\") pod \"76935545-e8e3-4523-97b0-edce25c6756d\" (UID: \"76935545-e8e3-4523-97b0-edce25c6756d\") " Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.797932 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76935545-e8e3-4523-97b0-edce25c6756d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "76935545-e8e3-4523-97b0-edce25c6756d" (UID: "76935545-e8e3-4523-97b0-edce25c6756d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.801077 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76935545-e8e3-4523-97b0-edce25c6756d" (UID: "76935545-e8e3-4523-97b0-edce25c6756d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.813082 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-scripts" (OuterVolumeSpecName: "scripts") pod "76935545-e8e3-4523-97b0-edce25c6756d" (UID: "76935545-e8e3-4523-97b0-edce25c6756d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.821161 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76935545-e8e3-4523-97b0-edce25c6756d-kube-api-access-fq8ck" (OuterVolumeSpecName: "kube-api-access-fq8ck") pod "76935545-e8e3-4523-97b0-edce25c6756d" (UID: "76935545-e8e3-4523-97b0-edce25c6756d"). InnerVolumeSpecName "kube-api-access-fq8ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.864278 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76935545-e8e3-4523-97b0-edce25c6756d" (UID: "76935545-e8e3-4523-97b0-edce25c6756d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.899149 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq8ck\" (UniqueName: \"kubernetes.io/projected/76935545-e8e3-4523-97b0-edce25c6756d-kube-api-access-fq8ck\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.899176 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.899201 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76935545-e8e3-4523-97b0-edce25c6756d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.899209 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.899221 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:56 crc kubenswrapper[4746]: I0129 16:59:56.903890 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data" (OuterVolumeSpecName: "config-data") pod "76935545-e8e3-4523-97b0-edce25c6756d" (UID: "76935545-e8e3-4523-97b0-edce25c6756d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.000689 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76935545-e8e3-4523-97b0-edce25c6756d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.049043 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cdeb76e4-0143-44ad-935d-eb486d6fa9dc/ovn-northd/0.log" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.049096 4746 generic.go:334] "Generic (PLEG): container finished" podID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerID="251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34" exitCode=139 Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.049171 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cdeb76e4-0143-44ad-935d-eb486d6fa9dc","Type":"ContainerDied","Data":"251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34"} Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.051560 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e is running failed: container process not found" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.053917 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e is running failed: container process not found" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.054297 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e is running failed: container process not found" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.054342 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" containerName="nova-scheduler-scheduler" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.054701 4746 generic.go:334] "Generic (PLEG): container finished" podID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" exitCode=0 Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.054750 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0","Type":"ContainerDied","Data":"905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e"} Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.057790 4746 generic.go:334] "Generic (PLEG): container finished" podID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" containerID="97ce90f5b14d69f5966c8a456653fa79fce41aed308c4ece923536d92ee0a358" exitCode=0 Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.057872 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b931fc5d-d5c3-429f-9c40-073a56aed3ba","Type":"ContainerDied","Data":"97ce90f5b14d69f5966c8a456653fa79fce41aed308c4ece923536d92ee0a358"} Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.064377 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cf727c52-99b6-4ab8-9815-4ab8c2dd5050","Type":"ContainerDied","Data":"802dddbc19775990d7e7c69801ae1105528a44dde62bf6d7beb4076db6d83716"} Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.064485 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.074539 4746 generic.go:334] "Generic (PLEG): container finished" podID="76935545-e8e3-4523-97b0-edce25c6756d" containerID="59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e" exitCode=0 Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.074605 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76935545-e8e3-4523-97b0-edce25c6756d","Type":"ContainerDied","Data":"59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e"} Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.074633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"76935545-e8e3-4523-97b0-edce25c6756d","Type":"ContainerDied","Data":"024b77fbb808564631fc6d6ae314b6138dbf3d73713f5f228cc4ac270729222f"} Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.074649 4746 scope.go:117] "RemoveContainer" containerID="0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.074745 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.084223 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.092410 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7d9fr" event={"ID":"a5213774-9475-450b-a26f-2212d807c39f","Type":"ContainerDied","Data":"3299a49c2bda8c3c664a734d50b79ebe8796b2c46837c3cabd449a4ea46e4c1d"} Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.092518 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7d9fr" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.093981 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.094941 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-654869dd86-s9th4" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.095148 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.095776 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d95d-account-create-update-vh8v8" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.114865 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.120755 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.125115 4746 scope.go:117] "RemoveContainer" containerID="59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.136280 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.142781 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.200471 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d95d-account-create-update-vh8v8"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.201624 4746 scope.go:117] "RemoveContainer" containerID="0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f" Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.202109 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f\": container with ID starting with 0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f not found: ID does not exist" containerID="0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.202158 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f"} err="failed to get container status \"0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f\": rpc error: code = NotFound desc = could not find container \"0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f\": container with ID starting with 0ad2348dbf83eda1a0cb8b1a424c4bfa6b0f2333c534ed07cc55d82bc335b80f not found: ID does not exist" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.202206 4746 scope.go:117] "RemoveContainer" containerID="59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e" Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.204231 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e\": container with ID starting with 59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e not found: ID does not exist" containerID="59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.204274 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e"} err="failed to get container status \"59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e\": rpc error: code = NotFound desc = could not find container \"59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e\": container with ID starting with 59b9a847772707b8fad4dc8917109d433ee4251068be83dbd7fb066fda274e0e not found: ID does not exist" Jan 29 16:59:57 crc kubenswrapper[4746]: 
I0129 16:59:57.204301 4746 scope.go:117] "RemoveContainer" containerID="cc32a899c7d388e4214b155041db0c6e217f8e04fb43bb99bba408d5f272016d" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.210071 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d95d-account-create-update-vh8v8"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.218685 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-654869dd86-s9th4"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.219321 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.228008 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.233000 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-654869dd86-s9th4"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.237413 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.237484 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" containerName="nova-cell1-conductor-conductor" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.238515 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7d9fr"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.244124 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7d9fr"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.305894 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhhg2\" (UniqueName: \"kubernetes.io/projected/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-kube-api-access-nhhg2\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.305944 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1de36f4e-a20f-421c-a50f-8dc90f3c01a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.335334 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.393371 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cdeb76e4-0143-44ad-935d-eb486d6fa9dc/ovn-northd/0.log" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.393439 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.442395 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514585 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kolla-config\") pod \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514670 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-config-data\") pod \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514706 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-metrics-certs-tls-certs\") pod \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514742 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7swmr\" (UniqueName: \"kubernetes.io/projected/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kube-api-access-7swmr\") pod \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514768 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-memcached-tls-certs\") pod \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514804 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-northd-tls-certs\") pod \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514882 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb5mv\" (UniqueName: \"kubernetes.io/projected/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-kube-api-access-wb5mv\") pod \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514917 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-rundir\") pod \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514967 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-combined-ca-bundle\") pod \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.514990 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-combined-ca-bundle\") pod \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\" (UID: \"b931fc5d-d5c3-429f-9c40-073a56aed3ba\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.515025 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-config\") pod \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.515080 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-scripts\") pod \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\" (UID: \"cdeb76e4-0143-44ad-935d-eb486d6fa9dc\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.516359 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-config-data" (OuterVolumeSpecName: "config-data") pod "b931fc5d-d5c3-429f-9c40-073a56aed3ba" (UID: "b931fc5d-d5c3-429f-9c40-073a56aed3ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.516498 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-scripts" (OuterVolumeSpecName: "scripts") pod "cdeb76e4-0143-44ad-935d-eb486d6fa9dc" (UID: "cdeb76e4-0143-44ad-935d-eb486d6fa9dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.516770 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b931fc5d-d5c3-429f-9c40-073a56aed3ba" (UID: "b931fc5d-d5c3-429f-9c40-073a56aed3ba"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.517157 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "cdeb76e4-0143-44ad-935d-eb486d6fa9dc" (UID: "cdeb76e4-0143-44ad-935d-eb486d6fa9dc"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.517837 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-config" (OuterVolumeSpecName: "config") pod "cdeb76e4-0143-44ad-935d-eb486d6fa9dc" (UID: "cdeb76e4-0143-44ad-935d-eb486d6fa9dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.521073 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-kube-api-access-wb5mv" (OuterVolumeSpecName: "kube-api-access-wb5mv") pod "cdeb76e4-0143-44ad-935d-eb486d6fa9dc" (UID: "cdeb76e4-0143-44ad-935d-eb486d6fa9dc"). InnerVolumeSpecName "kube-api-access-wb5mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.521889 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kube-api-access-7swmr" (OuterVolumeSpecName: "kube-api-access-7swmr") pod "b931fc5d-d5c3-429f-9c40-073a56aed3ba" (UID: "b931fc5d-d5c3-429f-9c40-073a56aed3ba"). InnerVolumeSpecName "kube-api-access-7swmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.543225 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b931fc5d-d5c3-429f-9c40-073a56aed3ba" (UID: "b931fc5d-d5c3-429f-9c40-073a56aed3ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.554230 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdeb76e4-0143-44ad-935d-eb486d6fa9dc" (UID: "cdeb76e4-0143-44ad-935d-eb486d6fa9dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.575481 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "b931fc5d-d5c3-429f-9c40-073a56aed3ba" (UID: "b931fc5d-d5c3-429f-9c40-073a56aed3ba"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.581606 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "cdeb76e4-0143-44ad-935d-eb486d6fa9dc" (UID: "cdeb76e4-0143-44ad-935d-eb486d6fa9dc"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.583970 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "cdeb76e4-0143-44ad-935d-eb486d6fa9dc" (UID: "cdeb76e4-0143-44ad-935d-eb486d6fa9dc"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.616571 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-combined-ca-bundle\") pod \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.616658 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-config-data\") pod \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.616717 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p44v\" (UniqueName: \"kubernetes.io/projected/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-kube-api-access-5p44v\") pod \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\" (UID: \"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0\") " Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617230 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617251 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617265 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7swmr\" (UniqueName: \"kubernetes.io/projected/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kube-api-access-7swmr\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617276 4746 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617288 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617299 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb5mv\" (UniqueName: \"kubernetes.io/projected/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-kube-api-access-wb5mv\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617311 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617322 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617333 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b931fc5d-d5c3-429f-9c40-073a56aed3ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617344 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617357 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdeb76e4-0143-44ad-935d-eb486d6fa9dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.617370 4746 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b931fc5d-d5c3-429f-9c40-073a56aed3ba-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.620430 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-kube-api-access-5p44v" (OuterVolumeSpecName: "kube-api-access-5p44v") pod "cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" (UID: "cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0"). InnerVolumeSpecName "kube-api-access-5p44v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.637344 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.638063 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.638228 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-config-data" (OuterVolumeSpecName: "config-data") pod "cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" (UID: "cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.638526 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.638719 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.638793 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.640485 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.643041 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.643080 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.647012 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" (UID: "cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.719659 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.719705 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.719742 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:57 crc kubenswrapper[4746]: E0129 16:59:57.719761 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data podName:6b6e0a39-5c0e-4632-bc24-dd8c7eb25788 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:05.719736873 +0000 UTC m=+1528.120321547 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data") pod "rabbitmq-cell1-server-0" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:59:57 crc kubenswrapper[4746]: I0129 16:59:57.719796 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p44v\" (UniqueName: \"kubernetes.io/projected/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0-kube-api-access-5p44v\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.104735 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0","Type":"ContainerDied","Data":"6dbc5f0d9de373c9e7d441dc013fc5d8642b85a90c3d3a29cc15b679dc6d9ffa"} Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.104782 4746 scope.go:117] "RemoveContainer" containerID="905b77286caae8fe8cc41a3ef217188e7b8ffc2bb6afaa0beb0c0d8d97e9993e" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.104881 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.109060 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b931fc5d-d5c3-429f-9c40-073a56aed3ba","Type":"ContainerDied","Data":"aeb70121e546a3a95569fc239d4e603ec65c4e29bff7728db4450182ad58056a"} Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.109165 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.116709 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_cdeb76e4-0143-44ad-935d-eb486d6fa9dc/ovn-northd/0.log" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.116752 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"cdeb76e4-0143-44ad-935d-eb486d6fa9dc","Type":"ContainerDied","Data":"dab3202467864c8b353aace393406de99fdb3c04d92135b6af1ed8c3b732dacc"} Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.116854 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.131014 4746 scope.go:117] "RemoveContainer" containerID="97ce90f5b14d69f5966c8a456653fa79fce41aed308c4ece923536d92ee0a358" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.145765 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.166346 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.168021 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-6f4c9c876f-dbjbj" podUID="2d2a3529-662b-4eb6-aebd-c15e694cab4e" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.156:5000/v3\": read tcp 10.217.0.2:42590->10.217.0.156:5000: read: connection reset by peer" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.172046 4746 scope.go:117] "RemoveContainer" containerID="8d41c00ff4e878b0ca19eebfb37df14fb06c2ce7bba3e45e02c666faf55cdc88" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.173631 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.180470 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.186251 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.191078 4746 scope.go:117] "RemoveContainer" containerID="251102a7b2932fe8f3fe5746847e719acf1c4b919ae6e958352358332d1b7b34" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.193237 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 29 16:59:58 crc kubenswrapper[4746]: E0129 16:59:58.226967 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:59:58 crc kubenswrapper[4746]: E0129 16:59:58.227049 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data podName:71c96526-7c37-42c2-896e-b551dd6ed5b8 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:06.227031463 +0000 UTC m=+1528.627616107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data") pod "rabbitmq-server-0" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8") : configmap "rabbitmq-config-data" not found Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.462704 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1de36f4e-a20f-421c-a50f-8dc90f3c01a5" path="/var/lib/kubelet/pods/1de36f4e-a20f-421c-a50f-8dc90f3c01a5/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.463262 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" path="/var/lib/kubelet/pods/33cf45d3-8c95-4453-9a1e-46ad14bce822/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.463820 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" path="/var/lib/kubelet/pods/4cfbdab6-8b6e-4199-808c-be07e373df64/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.464846 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" path="/var/lib/kubelet/pods/4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.465510 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76935545-e8e3-4523-97b0-edce25c6756d" path="/var/lib/kubelet/pods/76935545-e8e3-4523-97b0-edce25c6756d/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.466079 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5213774-9475-450b-a26f-2212d807c39f" path="/var/lib/kubelet/pods/a5213774-9475-450b-a26f-2212d807c39f/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.467090 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" path="/var/lib/kubelet/pods/b931fc5d-d5c3-429f-9c40-073a56aed3ba/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.467596 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" path="/var/lib/kubelet/pods/cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.468114 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" path="/var/lib/kubelet/pods/cdeb76e4-0143-44ad-935d-eb486d6fa9dc/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.469104 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" path="/var/lib/kubelet/pods/cf727c52-99b6-4ab8-9815-4ab8c2dd5050/volumes" Jan 29 16:59:58 crc kubenswrapper[4746]: I0129 16:59:58.675256 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 29 16:59:59 crc kubenswrapper[4746]: I0129 16:59:59.414474 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.131806 4746 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw"] Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.136132 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" containerName="nova-scheduler-scheduler" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.136454 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" containerName="nova-scheduler-scheduler" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.136534 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="openstack-network-exporter" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.136643 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="openstack-network-exporter" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.136752 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="ovn-northd" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.136832 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="ovn-northd" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.136909 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5213774-9475-450b-a26f-2212d807c39f" containerName="mariadb-account-create-update" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.136976 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5213774-9475-450b-a26f-2212d807c39f" containerName="mariadb-account-create-update" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.137069 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" containerName="memcached" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.137167 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" containerName="memcached" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.137282 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="probe" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.137373 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="probe" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.137488 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.137560 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.137633 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="cinder-scheduler" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.137711 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="cinder-scheduler" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.137793 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.137873 4746 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.137952 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbc4caa-43b8-42c2-83ae-e2448dda745f" containerName="nova-cell0-conductor-conductor" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.138036 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbc4caa-43b8-42c2-83ae-e2448dda745f" containerName="nova-cell0-conductor-conductor" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.138127 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.138431 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-log" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.138566 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.138652 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.138737 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-httpd" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.138807 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-httpd" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.138875 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.138950 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-log" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.139024 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-httpd" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.139100 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-httpd" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.139174 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140259 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140313 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5213774-9475-450b-a26f-2212d807c39f" containerName="mariadb-account-create-update" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140321 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5213774-9475-450b-a26f-2212d807c39f" containerName="mariadb-account-create-update" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140337 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8f1a81-ca32-4335-be69-a9159ede91fa" containerName="kube-state-metrics" Jan 29 17:00:00 crc 
kubenswrapper[4746]: I0129 17:00:00.140344 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8f1a81-ca32-4335-be69-a9159ede91fa" containerName="kube-state-metrics" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140352 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140359 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-api" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140375 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140381 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140397 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-metadata" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140404 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-metadata" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140421 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140430 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-log" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140443 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140451 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-api" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.140459 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140465 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140749 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140762 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cf45d3-8c95-4453-9a1e-46ad14bce822" containerName="barbican-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140772 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-httpd" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140783 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8f1a81-ca32-4335-be69-a9159ede91fa" containerName="kube-state-metrics" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140795 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" 
containerName="nova-metadata-metadata" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140808 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140813 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140822 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5213774-9475-450b-a26f-2212d807c39f" containerName="mariadb-account-create-update" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140831 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140838 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="cinder-scheduler" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140848 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140858 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="openstack-network-exporter" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140867 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf727c52-99b6-4ab8-9815-4ab8c2dd5050" containerName="cinder-api" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140878 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="76935545-e8e3-4523-97b0-edce25c6756d" containerName="probe" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140886 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db12a59-b8e4-43e4-add4-9cb361cfe6c5" containerName="placement-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140896 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" containerName="memcached" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140903 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93a42f7-a972-44c2-a2a4-5f698ba4caf7" containerName="glance-httpd" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140911 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd6dc02-1269-43b8-a1aa-d239875e4902" containerName="nova-metadata-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140922 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5213774-9475-450b-a26f-2212d807c39f" containerName="mariadb-account-create-update" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140933 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfbdab6-8b6e-4199-808c-be07e373df64" containerName="nova-api-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140942 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdeb76e4-0143-44ad-935d-eb486d6fa9dc" containerName="ovn-northd" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140952 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e047f91-5bb0-4f45-9ae4-8d2a6eb0d8f6" containerName="glance-log" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140963 4746 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbc4caa-43b8-42c2-83ae-e2448dda745f" containerName="nova-cell0-conductor-conductor" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.140971 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1f5ca4-131c-43b7-ab4f-d9efe3b9dae0" containerName="nova-scheduler-scheduler" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.141555 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.144476 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.151907 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.159783 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw"] Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.267003 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/077397e1-1b51-4160-bbd5-8d44b9e9bae3-secret-volume\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.267359 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077397e1-1b51-4160-bbd5-8d44b9e9bae3-config-volume\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.267525 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fprg\" (UniqueName: \"kubernetes.io/projected/077397e1-1b51-4160-bbd5-8d44b9e9bae3-kube-api-access-6fprg\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.368864 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077397e1-1b51-4160-bbd5-8d44b9e9bae3-config-volume\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.368976 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fprg\" (UniqueName: \"kubernetes.io/projected/077397e1-1b51-4160-bbd5-8d44b9e9bae3-kube-api-access-6fprg\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.369031 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/077397e1-1b51-4160-bbd5-8d44b9e9bae3-secret-volume\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.371546 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077397e1-1b51-4160-bbd5-8d44b9e9bae3-config-volume\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.380440 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/077397e1-1b51-4160-bbd5-8d44b9e9bae3-secret-volume\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.396230 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fprg\" (UniqueName: \"kubernetes.io/projected/077397e1-1b51-4160-bbd5-8d44b9e9bae3-kube-api-access-6fprg\") pod \"collect-profiles-29495100-76rfw\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.463943 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.819320 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.822001 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.824681 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 17:00:00 crc kubenswrapper[4746]: E0129 17:00:00.824798 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="galera" Jan 29 17:00:00 crc kubenswrapper[4746]: I0129 17:00:00.893389 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw"] Jan 29 17:00:01 crc kubenswrapper[4746]: I0129 
17:00:01.156620 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" event={"ID":"077397e1-1b51-4160-bbd5-8d44b9e9bae3","Type":"ContainerStarted","Data":"15501e3c224da82ad7c7d2079ec1f41a1986b3ac9e2f31f272b00898c9bcb7bb"} Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.171336 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.173494 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.174838 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.174885 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="galera" Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.214070 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6 is running failed: container process not found" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.214493 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6 is running failed: container process not found" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.214868 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6 is running failed: container process not found" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.214894 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6 is 
running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" containerName="nova-cell1-conductor-conductor" Jan 29 17:00:02 crc kubenswrapper[4746]: I0129 17:00:02.305246 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="b931fc5d-d5c3-429f-9c40-073a56aed3ba" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: i/o timeout" Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.638261 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.638735 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.639504 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.639609 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.639648 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.642504 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.643950 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:00:02 crc kubenswrapper[4746]: E0129 17:00:02.644020 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd" Jan 29 17:00:04 crc kubenswrapper[4746]: I0129 17:00:04.445952 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:00:04 crc kubenswrapper[4746]: E0129 17:00:04.446677 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.693412 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.701227 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.711482 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:05 crc kubenswrapper[4746]: E0129 17:00:05.762336 4746 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 17:00:05 crc kubenswrapper[4746]: E0129 17:00:05.762405 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data podName:6b6e0a39-5c0e-4632-bc24-dd8c7eb25788 nodeName:}" failed. No retries permitted until 2026-01-29 17:00:21.762385679 +0000 UTC m=+1544.162970323 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data") pod "rabbitmq-cell1-server-0" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788") : configmap "rabbitmq-cell1-config-data" not found Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.807932 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.864756 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srb69\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-kube-api-access-srb69\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.864825 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-erlang-cookie\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.864852 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c96526-7c37-42c2-896e-b551dd6ed5b8-erlang-cookie-secret\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.864879 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-plugins-conf\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.864916 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.864941 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-plugins\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.864978 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-plugins-conf\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865052 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865088 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-tls\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865109 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: 
\"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865137 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-combined-ca-bundle\") pod \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865202 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2lp8\" (UniqueName: \"kubernetes.io/projected/b98c0c71-5d0c-48b2-a7d6-515a44ded344-kube-api-access-j2lp8\") pod \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865240 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-server-conf\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865275 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865302 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-tls\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865544 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-erlang-cookie\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865587 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-plugins\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865612 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-confd\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865639 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-erlang-cookie-secret\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865669 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71c96526-7c37-42c2-896e-b551dd6ed5b8-pod-info\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: 
\"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865777 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-confd\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865818 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-config-data\") pod \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\" (UID: \"b98c0c71-5d0c-48b2-a7d6-515a44ded344\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865842 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-server-conf\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865871 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-pod-info\") pod \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\" (UID: \"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.865919 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz4l2\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-kube-api-access-cz4l2\") pod \"71c96526-7c37-42c2-896e-b551dd6ed5b8\" (UID: \"71c96526-7c37-42c2-896e-b551dd6ed5b8\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.867456 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.868254 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.870721 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.870819 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-kube-api-access-srb69" (OuterVolumeSpecName: "kube-api-access-srb69") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). 
InnerVolumeSpecName "kube-api-access-srb69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.873396 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.876016 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.876107 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.877825 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98c0c71-5d0c-48b2-a7d6-515a44ded344-kube-api-access-j2lp8" (OuterVolumeSpecName: "kube-api-access-j2lp8") pod "b98c0c71-5d0c-48b2-a7d6-515a44ded344" (UID: "b98c0c71-5d0c-48b2-a7d6-515a44ded344"). InnerVolumeSpecName "kube-api-access-j2lp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.878597 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/71c96526-7c37-42c2-896e-b551dd6ed5b8-pod-info" (OuterVolumeSpecName: "pod-info") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.878650 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c96526-7c37-42c2-896e-b551dd6ed5b8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.878967 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.879908 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). 
InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.882746 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.882979 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.883728 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.886041 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-kube-api-access-cz4l2" (OuterVolumeSpecName: "kube-api-access-cz4l2") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "kube-api-access-cz4l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.886319 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-pod-info" (OuterVolumeSpecName: "pod-info") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.891216 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.899483 4746 generic.go:334] "Generic (PLEG): container finished" podID="2d2a3529-662b-4eb6-aebd-c15e694cab4e" containerID="7d91e45479b9bc92a37b60229bed29f47cbec6a2f001ef73702b8bf9cbd0a8be" exitCode=0 Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.899562 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f4c9c876f-dbjbj" event={"ID":"2d2a3529-662b-4eb6-aebd-c15e694cab4e","Type":"ContainerDied","Data":"7d91e45479b9bc92a37b60229bed29f47cbec6a2f001ef73702b8bf9cbd0a8be"} Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.916369 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-config-data" (OuterVolumeSpecName: "config-data") pod "b98c0c71-5d0c-48b2-a7d6-515a44ded344" (UID: "b98c0c71-5d0c-48b2-a7d6-515a44ded344"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.930364 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b98c0c71-5d0c-48b2-a7d6-515a44ded344" (UID: "b98c0c71-5d0c-48b2-a7d6-515a44ded344"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.934703 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data" (OuterVolumeSpecName: "config-data") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.936400 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f4c9c876f-dbjbj" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.942972 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data" (OuterVolumeSpecName: "config-data") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.958496 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-server-conf" (OuterVolumeSpecName: "server-conf") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.959735 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-server-conf" (OuterVolumeSpecName: "server-conf") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970027 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-ceilometer-tls-certs\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970130 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lwl7\" (UniqueName: \"kubernetes.io/projected/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-kube-api-access-9lwl7\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970153 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-scripts\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970217 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-sg-core-conf-yaml\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970244 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-log-httpd\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970279 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-run-httpd\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970305 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-config-data\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970340 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-combined-ca-bundle\") pod \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\" (UID: \"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90\") " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.970835 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.974096 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-scripts" (OuterVolumeSpecName: "scripts") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.974405 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979311 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979351 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979364 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979375 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2lp8\" (UniqueName: \"kubernetes.io/projected/b98c0c71-5d0c-48b2-a7d6-515a44ded344-kube-api-access-j2lp8\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979385 4746 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979398 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979407 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979415 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979426 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979435 4746 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-erlang-cookie-secret\") on 
node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979443 4746 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71c96526-7c37-42c2-896e-b551dd6ed5b8-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979452 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979460 4746 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979468 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c0c71-5d0c-48b2-a7d6-515a44ded344-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979476 4746 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979483 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979493 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz4l2\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-kube-api-access-cz4l2\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979502 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979509 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srb69\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-kube-api-access-srb69\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979517 4746 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c96526-7c37-42c2-896e-b551dd6ed5b8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979526 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979535 4746 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979543 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979550 4746 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979558 4746 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.979567 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c96526-7c37-42c2-896e-b551dd6ed5b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.996287 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-kube-api-access-9lwl7" (OuterVolumeSpecName: "kube-api-access-9lwl7") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "kube-api-access-9lwl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:05 crc kubenswrapper[4746]: I0129 17:00:05.997485 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.008801 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "71c96526-7c37-42c2-896e-b551dd6ed5b8" (UID: "71c96526-7c37-42c2-896e-b551dd6ed5b8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.013597 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" (UID: "6b6e0a39-5c0e-4632-bc24-dd8c7eb25788"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.014689 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.035354 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.038114 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.066926 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.075171 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080215 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data-custom\") pod \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080259 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-scripts\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080297 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d23b1-5d41-40a9-88ee-23a039de0ed7-logs\") pod \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080328 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-credential-keys\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-config-data\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080438 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gsq9\" (UniqueName: \"kubernetes.io/projected/f19d23b1-5d41-40a9-88ee-23a039de0ed7-kube-api-access-9gsq9\") pod \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080461 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-fernet-keys\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080502 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-combined-ca-bundle\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080523 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-public-tls-certs\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080549 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cbtc\" (UniqueName: \"kubernetes.io/projected/2d2a3529-662b-4eb6-aebd-c15e694cab4e-kube-api-access-6cbtc\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080595 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-internal-tls-certs\") pod \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\" (UID: \"2d2a3529-662b-4eb6-aebd-c15e694cab4e\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080650 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data\") pod \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.080688 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-combined-ca-bundle\") pod \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\" (UID: \"f19d23b1-5d41-40a9-88ee-23a039de0ed7\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081053 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lwl7\" (UniqueName: \"kubernetes.io/projected/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-kube-api-access-9lwl7\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081075 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71c96526-7c37-42c2-896e-b551dd6ed5b8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081089 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081100 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081113 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081124 4746 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081136 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node 
\"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.081147 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.085529 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19d23b1-5d41-40a9-88ee-23a039de0ed7-kube-api-access-9gsq9" (OuterVolumeSpecName: "kube-api-access-9gsq9") pod "f19d23b1-5d41-40a9-88ee-23a039de0ed7" (UID: "f19d23b1-5d41-40a9-88ee-23a039de0ed7"). InnerVolumeSpecName "kube-api-access-9gsq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.085547 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19d23b1-5d41-40a9-88ee-23a039de0ed7-logs" (OuterVolumeSpecName: "logs") pod "f19d23b1-5d41-40a9-88ee-23a039de0ed7" (UID: "f19d23b1-5d41-40a9-88ee-23a039de0ed7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.088150 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.089962 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-scripts" (OuterVolumeSpecName: "scripts") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.090061 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f19d23b1-5d41-40a9-88ee-23a039de0ed7" (UID: "f19d23b1-5d41-40a9-88ee-23a039de0ed7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.097580 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-config-data" (OuterVolumeSpecName: "config-data") pod "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" (UID: "8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.097647 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2a3529-662b-4eb6-aebd-c15e694cab4e-kube-api-access-6cbtc" (OuterVolumeSpecName: "kube-api-access-6cbtc") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "kube-api-access-6cbtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.101702 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.116114 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f19d23b1-5d41-40a9-88ee-23a039de0ed7" (UID: "f19d23b1-5d41-40a9-88ee-23a039de0ed7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.116820 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-config-data" (OuterVolumeSpecName: "config-data") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.124822 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.129445 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data" (OuterVolumeSpecName: "config-data") pod "f19d23b1-5d41-40a9-88ee-23a039de0ed7" (UID: "f19d23b1-5d41-40a9-88ee-23a039de0ed7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.131468 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.143200 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2d2a3529-662b-4eb6-aebd-c15e694cab4e" (UID: "2d2a3529-662b-4eb6-aebd-c15e694cab4e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182112 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-galera-tls-certs\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182164 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kolla-config\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182243 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-combined-ca-bundle\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182282 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-default\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182310 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm97q\" (UniqueName: \"kubernetes.io/projected/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kube-api-access-nm97q\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182329 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-generated\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182422 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-operator-scripts\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182446 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\" (UID: \"1f5617cc-a91a-4eb7-83d9-25f01bcb890c\") " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182725 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182755 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182768 4746 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182777 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182786 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f19d23b1-5d41-40a9-88ee-23a039de0ed7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182795 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182804 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d23b1-5d41-40a9-88ee-23a039de0ed7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182819 4746 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182836 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182855 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gsq9\" (UniqueName: \"kubernetes.io/projected/f19d23b1-5d41-40a9-88ee-23a039de0ed7-kube-api-access-9gsq9\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182867 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182878 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182889 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d2a3529-662b-4eb6-aebd-c15e694cab4e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.182901 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cbtc\" (UniqueName: \"kubernetes.io/projected/2d2a3529-662b-4eb6-aebd-c15e694cab4e-kube-api-access-6cbtc\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.183077 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.183611 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.183818 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.184130 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.186660 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kube-api-access-nm97q" (OuterVolumeSpecName: "kube-api-access-nm97q") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "kube-api-access-nm97q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.194586 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.204153 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.218431 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "1f5617cc-a91a-4eb7-83d9-25f01bcb890c" (UID: "1f5617cc-a91a-4eb7-83d9-25f01bcb890c"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283844 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283917 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283931 4746 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283943 4746 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283953 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283961 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283972 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm97q\" (UniqueName: \"kubernetes.io/projected/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-kube-api-access-nm97q\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.283983 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1f5617cc-a91a-4eb7-83d9-25f01bcb890c-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.299490 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.385661 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.920007 4746 generic.go:334] "Generic (PLEG): container finished" podID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerID="ffe4f88f98c0c616c8a6607cb72e6acd7cdee0142ea8746e929924d4801cbfca" exitCode=0 Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.920375 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c4b578977-hfn59" event={"ID":"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6","Type":"ContainerDied","Data":"ffe4f88f98c0c616c8a6607cb72e6acd7cdee0142ea8746e929924d4801cbfca"} Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.920406 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c4b578977-hfn59" 
event={"ID":"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6","Type":"ContainerDied","Data":"8eb516ab6b5ac8c4484816892e0ffa113522b32dba4f06cfafecea6cf9b07400"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.920415 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb516ab6b5ac8c4484816892e0ffa113522b32dba4f06cfafecea6cf9b07400"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.924018 4746 generic.go:334] "Generic (PLEG): container finished" podID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.924137 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.924263 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b98c0c71-5d0c-48b2-a7d6-515a44ded344","Type":"ContainerDied","Data":"c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.924300 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b98c0c71-5d0c-48b2-a7d6-515a44ded344","Type":"ContainerDied","Data":"a5f3c84cfbc0d4ea75bbf22cea92fba730c076c53b884dd9c1577ea98d4f9dd9"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.924318 4746 scope.go:117] "RemoveContainer" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.928431 4746 generic.go:334] "Generic (PLEG): container finished" podID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerID="9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.928525 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.929033 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788","Type":"ContainerDied","Data":"9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.929061 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6b6e0a39-5c0e-4632-bc24-dd8c7eb25788","Type":"ContainerDied","Data":"6e3e128dbba555ec4c780af1e913290c42f8c71e02b73ce12f0257e660f557b5"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.936349 4746 generic.go:334] "Generic (PLEG): container finished" podID="077397e1-1b51-4160-bbd5-8d44b9e9bae3" containerID="988c5a35bb2e0bd807e06c35e8d60fa439e114ffe814729777da1c328761aa75" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.936392 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" event={"ID":"077397e1-1b51-4160-bbd5-8d44b9e9bae3","Type":"ContainerDied","Data":"988c5a35bb2e0bd807e06c35e8d60fa439e114ffe814729777da1c328761aa75"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.939125 4746 generic.go:334] "Generic (PLEG): container finished" podID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerID="fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.939218 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f5617cc-a91a-4eb7-83d9-25f01bcb890c","Type":"ContainerDied","Data":"fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.939242 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1f5617cc-a91a-4eb7-83d9-25f01bcb890c","Type":"ContainerDied","Data":"024a529f91d69d5cf5a9f12dd61efc57c8420b0fc6d3303d5cdbf2aacc49ebc1"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.939293 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.940809 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.943573 4746 generic.go:334] "Generic (PLEG): container finished" podID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerID="6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.943618 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71c96526-7c37-42c2-896e-b551dd6ed5b8","Type":"ContainerDied","Data":"6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.943640 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71c96526-7c37-42c2-896e-b551dd6ed5b8","Type":"ContainerDied","Data":"cf5729bd3a486d44a7f78af891a024c22f7b6654529830bd75f2d4e5b8ae9ac7"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.943690 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.947415 4746 generic.go:334] "Generic (PLEG): container finished" podID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerID="d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.947448 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" event={"ID":"f19d23b1-5d41-40a9-88ee-23a039de0ed7","Type":"ContainerDied","Data":"d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.947464 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-fd8d7b7c5-2bjng" event={"ID":"f19d23b1-5d41-40a9-88ee-23a039de0ed7","Type":"ContainerDied","Data":"0a98efe44a1ee27a87394b20d9217e71dce3cd9b050ec278874d8ac9ca1f3676"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.947505 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-fd8d7b7c5-2bjng"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.949264 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f4c9c876f-dbjbj" event={"ID":"2d2a3529-662b-4eb6-aebd-c15e694cab4e","Type":"ContainerDied","Data":"6dca297ae2dd008725aa87fdc754f211aa331cee07a88050dc891fc934a0ee29"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.949343 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f4c9c876f-dbjbj"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.959131 4746 generic.go:334] "Generic (PLEG): container finished" podID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerID="304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.959208 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerDied","Data":"304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.959233 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90","Type":"ContainerDied","Data":"cca101ed6256bd0aa70fc88711464664181324285715d949907a3f96a1808385"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.959302 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.962555 4746 generic.go:334] "Generic (PLEG): container finished" podID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerID="0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c" exitCode=0
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.962646 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"717a3fe2-fd76-47c2-b7f2-859dd5186f9c","Type":"ContainerDied","Data":"0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c"}
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.963636 4746 scope.go:117] "RemoveContainer" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6"
Jan 29 17:00:06 crc kubenswrapper[4746]: E0129 17:00:06.964378 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6\": container with ID starting with c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6 not found: ID does not exist" containerID="c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.964415 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6"} err="failed to get container status \"c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6\": rpc error: code = NotFound desc = could not find container \"c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6\": container with ID starting with c42e5afb8b7a06c06a92aa92a208428957e87cb86fec2a4d636a6a81f8cd56d6 not found: ID does not exist"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.964440 4746 scope.go:117] "RemoveContainer" containerID="9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553"
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.973260 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.978582 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 17:00:06 crc kubenswrapper[4746]: I0129 17:00:06.997003 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.001063 4746 scope.go:117] "RemoveContainer" containerID="560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.012533 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.021879 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.030644 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.044719 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.053939 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.063295 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.069748 4746 scope.go:117] "RemoveContainer" containerID="9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.074884 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553\": container with ID starting with 9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553 not found: ID does not exist" containerID="9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.074933 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553"} err="failed to get container status \"9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553\": rpc error: code = NotFound desc = could not find container \"9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553\": container with ID starting with 9ef1dfa245dcfd318392710840f6fb705b4c16755e3c7a82e39f94cda600d553 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.074965 4746 scope.go:117] "RemoveContainer" containerID="560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.076026 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255\": container with ID starting with 560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255 not found: ID does not exist" containerID="560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.076068 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255"} err="failed to get container status \"560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255\": rpc error: code = NotFound desc = could not find container \"560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255\": container with ID starting with 560d711246e163edbdc5c28dea97147d0d3aa9c5a8de0096c0379037f4cf8255 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.076101 4746 scope.go:117] "RemoveContainer" containerID="fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.078240 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-fd8d7b7c5-2bjng"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.096054 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-ovndb-tls-certs\") pod \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.096180 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85km\" (UniqueName: \"kubernetes.io/projected/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-kube-api-access-q85km\") pod \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.096499 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-combined-ca-bundle\") pod \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.096551 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-httpd-config\") pod \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.096623 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-internal-tls-certs\") pod \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.096675 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-public-tls-certs\") pod \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.096764 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-config\") pod \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\" (UID: \"da3e5e7d-45e7-4ee6-a400-bd00932ea1d6\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.104340 4746 scope.go:117] "RemoveContainer" containerID="283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.105837 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-kube-api-access-q85km" (OuterVolumeSpecName: "kube-api-access-q85km") pod "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" (UID: "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6"). InnerVolumeSpecName "kube-api-access-q85km". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.108511 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" (UID: "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.109859 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-fd8d7b7c5-2bjng"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.130394 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.137382 4746 scope.go:117] "RemoveContainer" containerID="fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.141880 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3\": container with ID starting with fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3 not found: ID does not exist" containerID="fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.141925 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3"} err="failed to get container status \"fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3\": rpc error: code = NotFound desc = could not find container \"fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3\": container with ID starting with fd23c3a639d9a2d5d3276295ebfe9f8f862ed7208af12834f65829badf837ff3 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.142270 4746 scope.go:117] "RemoveContainer" containerID="283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.142646 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c\": container with ID starting with 283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c not found: ID does not exist" containerID="283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.142679 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c"} err="failed to get container status \"283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c\": rpc error: code = NotFound desc = could not find container \"283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c\": container with ID starting with 283580ae6f82f7a75739084f24ea041bf6880f7dd027a59bf2b593dc9f800a6c not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.142699 4746 scope.go:117] "RemoveContainer" containerID="6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.146979 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.155941 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" (UID: "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.156539 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-config" (OuterVolumeSpecName: "config") pod "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" (UID: "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.161477 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" (UID: "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.163678 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6f4c9c876f-dbjbj"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.165947 4746 scope.go:117] "RemoveContainer" containerID="f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.169624 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6f4c9c876f-dbjbj"]
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.192421 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" (UID: "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.192943 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" (UID: "da3e5e7d-45e7-4ee6-a400-bd00932ea1d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.196278 4746 scope.go:117] "RemoveContainer" containerID="6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.196658 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45\": container with ID starting with 6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45 not found: ID does not exist" containerID="6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.196685 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45"} err="failed to get container status \"6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45\": rpc error: code = NotFound desc = could not find container \"6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45\": container with ID starting with 6db84eff7050bf0a0a368590b96e7c8d0a5f84cfd8adfb7c17f79f4f28749d45 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.196705 4746 scope.go:117] "RemoveContainer" containerID="f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.197027 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e\": container with ID starting with f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e not found: ID does not exist" containerID="f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.197048 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e"} err="failed to get container status \"f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e\": rpc error: code = NotFound desc = could not find container \"f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e\": container with ID starting with f4387959259397bfbe0b1a694ebd01c4f920d50a50e44b0cdd6ac36bf741373e not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.197061 4746 scope.go:117] "RemoveContainer" containerID="d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198031 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-combined-ca-bundle\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198106 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-galera-tls-certs\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198459 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kolla-config\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198485 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpcw8\" (UniqueName: \"kubernetes.io/projected/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kube-api-access-qpcw8\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198517 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-operator-scripts\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198584 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-generated\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198621 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.198641 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-default\") pod \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\" (UID: \"717a3fe2-fd76-47c2-b7f2-859dd5186f9c\") "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.199235 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.199366 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200016 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200047 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85km\" (UniqueName: \"kubernetes.io/projected/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-kube-api-access-q85km\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200060 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200072 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200084 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200078 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200096 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200153 4746 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200169 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-config\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200180 4746 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.200103 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.202551 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kube-api-access-qpcw8" (OuterVolumeSpecName: "kube-api-access-qpcw8") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "kube-api-access-qpcw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.216046 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.217485 4746 scope.go:117] "RemoveContainer" containerID="d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.223978 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.236019 4746 scope.go:117] "RemoveContainer" containerID="d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.236997 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7\": container with ID starting with d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7 not found: ID does not exist" containerID="d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.237036 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7"} err="failed to get container status \"d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7\": rpc error: code = NotFound desc = could not find container \"d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7\": container with ID starting with d768585fe5637c37a0da5582d0c91d9888ffa05025d420d05b438a6231b1fdf7 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.237060 4746 scope.go:117] "RemoveContainer" containerID="d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.237467 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5\": container with ID starting with d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5 not found: ID does not exist" containerID="d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.237507 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5"} err="failed to get container status \"d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5\": rpc error: code = NotFound desc = could not find container \"d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5\": container with ID starting with d6f70d5bdaf36684e4e0141628ae99c39fa90ff5784173d7d1595b34ed5bd6a5 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.237553 4746 scope.go:117] "RemoveContainer" containerID="7d91e45479b9bc92a37b60229bed29f47cbec6a2f001ef73702b8bf9cbd0a8be"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.238635 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "717a3fe2-fd76-47c2-b7f2-859dd5186f9c" (UID: "717a3fe2-fd76-47c2-b7f2-859dd5186f9c"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.258867 4746 scope.go:117] "RemoveContainer" containerID="66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.274435 4746 scope.go:117] "RemoveContainer" containerID="4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.301223 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.301283 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.301299 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.301312 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.301321 4746 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.301329 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpcw8\" (UniqueName: \"kubernetes.io/projected/717a3fe2-fd76-47c2-b7f2-859dd5186f9c-kube-api-access-qpcw8\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.305492 4746 scope.go:117] "RemoveContainer" containerID="304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.315285 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.349598 4746 scope.go:117] "RemoveContainer" containerID="b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.367060 4746 scope.go:117] "RemoveContainer" containerID="66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.367609 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124\": container with ID starting with 66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124 not found: ID does not exist" containerID="66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.367651 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124"} err="failed to get container status \"66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124\": rpc error: code = NotFound desc = could not find container \"66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124\": container with ID starting with 66fc1890ea9a04c08261708b541ec2a1abfe30b5bc507c466e8159782363c124 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.367677 4746 scope.go:117] "RemoveContainer" containerID="4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.368006 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb\": container with ID starting with 4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb not found: ID does not exist" containerID="4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.368039 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb"} err="failed to get container status \"4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb\": rpc error: code = NotFound desc = could not find container \"4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb\": container with ID starting with 4075cef6716e46c7a2c75f80bc0e3e1b2948987eb474bc0babfa4b4053279ebb not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.368058 4746 scope.go:117] "RemoveContainer" containerID="304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.368424 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de\": container with ID starting with 304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de not found: ID does not exist" containerID="304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.368449 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de"} err="failed to get container status \"304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de\": rpc error: code = NotFound desc = could not find container \"304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de\": container with ID starting with 304c002e6e7bbb0b10ebaf42fe5740c1c5a7095cc88973261c520bd071c8d0de not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.368466 4746 scope.go:117] "RemoveContainer" containerID="b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.368703 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82\": container with ID starting with b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82 not found: ID does not exist" containerID="b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.368730 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82"} err="failed to get container status \"b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82\": rpc error: code = NotFound desc = could not find container \"b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82\": container with ID starting with b441981352e2cf330a4d8716aa8b353ea2211e55f0a9ee85a860945ce2041b82 not found: ID does not exist"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.402848 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.637496 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.637844 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.638067 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.638098 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server"
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.639587 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.640878 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.642402 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:07 crc kubenswrapper[4746]: E0129 17:00:07.642441 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.975578 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"717a3fe2-fd76-47c2-b7f2-859dd5186f9c","Type":"ContainerDied","Data":"0eec810ea8f3f534e68b7ca792c37994761f6474f4dac857c3015895a744b0ed"}
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.975672 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.975907 4746 scope.go:117] "RemoveContainer" containerID="0067b9a285ac7dcdacf865b79761b0d1ca9e1d3ad221a0670f3cdf500f2c604c"
Jan 29 17:00:07 crc kubenswrapper[4746]: I0129 17:00:07.984264 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c4b578977-hfn59"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.012256 4746 scope.go:117] "RemoveContainer" containerID="ae5a4edf6b68a4c05732cca45dbe163b03db7a46e160be1412e89340c7ef3b1d"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.017576 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.029054 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.036707 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c4b578977-hfn59"]
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.043497 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c4b578977-hfn59"]
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.295040 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.416574 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fprg\" (UniqueName: \"kubernetes.io/projected/077397e1-1b51-4160-bbd5-8d44b9e9bae3-kube-api-access-6fprg\") pod \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") "
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.416683 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077397e1-1b51-4160-bbd5-8d44b9e9bae3-config-volume\") pod \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") "
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.416773 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/077397e1-1b51-4160-bbd5-8d44b9e9bae3-secret-volume\") pod \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\" (UID: \"077397e1-1b51-4160-bbd5-8d44b9e9bae3\") "
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.417697 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077397e1-1b51-4160-bbd5-8d44b9e9bae3-config-volume" (OuterVolumeSpecName: "config-volume") pod "077397e1-1b51-4160-bbd5-8d44b9e9bae3" (UID: "077397e1-1b51-4160-bbd5-8d44b9e9bae3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.421167 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077397e1-1b51-4160-bbd5-8d44b9e9bae3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "077397e1-1b51-4160-bbd5-8d44b9e9bae3" (UID: "077397e1-1b51-4160-bbd5-8d44b9e9bae3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.421238 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077397e1-1b51-4160-bbd5-8d44b9e9bae3-kube-api-access-6fprg" (OuterVolumeSpecName: "kube-api-access-6fprg") pod "077397e1-1b51-4160-bbd5-8d44b9e9bae3" (UID: "077397e1-1b51-4160-bbd5-8d44b9e9bae3"). InnerVolumeSpecName "kube-api-access-6fprg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.455693 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" path="/var/lib/kubelet/pods/1f5617cc-a91a-4eb7-83d9-25f01bcb890c/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.456606 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2a3529-662b-4eb6-aebd-c15e694cab4e" path="/var/lib/kubelet/pods/2d2a3529-662b-4eb6-aebd-c15e694cab4e/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.457477 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" path="/var/lib/kubelet/pods/6b6e0a39-5c0e-4632-bc24-dd8c7eb25788/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.458668 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" path="/var/lib/kubelet/pods/717a3fe2-fd76-47c2-b7f2-859dd5186f9c/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.459498 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" path="/var/lib/kubelet/pods/71c96526-7c37-42c2-896e-b551dd6ed5b8/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.460625 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" path="/var/lib/kubelet/pods/8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.461418 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" path="/var/lib/kubelet/pods/b98c0c71-5d0c-48b2-a7d6-515a44ded344/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.462340 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" path="/var/lib/kubelet/pods/da3e5e7d-45e7-4ee6-a400-bd00932ea1d6/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.462877 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" path="/var/lib/kubelet/pods/f19d23b1-5d41-40a9-88ee-23a039de0ed7/volumes"
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.518513 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fprg\" (UniqueName: \"kubernetes.io/projected/077397e1-1b51-4160-bbd5-8d44b9e9bae3-kube-api-access-6fprg\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.518693 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077397e1-1b51-4160-bbd5-8d44b9e9bae3-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:08 crc kubenswrapper[4746]: I0129 17:00:08.518718 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/077397e1-1b51-4160-bbd5-8d44b9e9bae3-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:09 crc kubenswrapper[4746]: I0129 17:00:09.000626 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw" event={"ID":"077397e1-1b51-4160-bbd5-8d44b9e9bae3","Type":"ContainerDied","Data":"15501e3c224da82ad7c7d2079ec1f41a1986b3ac9e2f31f272b00898c9bcb7bb"}
Jan 29 17:00:09 crc kubenswrapper[4746]: I0129 17:00:09.001024 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15501e3c224da82ad7c7d2079ec1f41a1986b3ac9e2f31f272b00898c9bcb7bb"
Jan 29 17:00:09 crc kubenswrapper[4746]: I0129 17:00:09.000666 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-76rfw"
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.638147 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.639139 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.639648 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.639693 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server"
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.640304 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.642629 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.644155 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:12 crc kubenswrapper[4746]: E0129 17:00:12.644256 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd"
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.637851 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.638596 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.639114 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.639148 4746 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server"
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.640152 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.641643 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.643481 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:00:17 crc kubenswrapper[4746]: E0129 17:00:17.643566 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hlgxj" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd"
Jan 29 17:00:19 crc kubenswrapper[4746]: I0129 17:00:19.445618 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156"
Jan 29 17:00:19 crc kubenswrapper[4746]: E0129 17:00:19.446100 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7"
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.853145 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlgxj_db69fbf3-38bd-403b-b1e6-fbd724d15250/ovs-vswitchd/0.log"
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.854206 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hlgxj"
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.906708 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db69fbf3-38bd-403b-b1e6-fbd724d15250-scripts\") pod \"db69fbf3-38bd-403b-b1e6-fbd724d15250\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") "
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.906781 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-run\") pod \"db69fbf3-38bd-403b-b1e6-fbd724d15250\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") "
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.906889 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-etc-ovs\") pod \"db69fbf3-38bd-403b-b1e6-fbd724d15250\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") "
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.906946 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-lib\") pod \"db69fbf3-38bd-403b-b1e6-fbd724d15250\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") "
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.906998 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx8qt\" (UniqueName: \"kubernetes.io/projected/db69fbf3-38bd-403b-b1e6-fbd724d15250-kube-api-access-dx8qt\") pod \"db69fbf3-38bd-403b-b1e6-fbd724d15250\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") "
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.907043 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-log\") pod \"db69fbf3-38bd-403b-b1e6-fbd724d15250\" (UID: \"db69fbf3-38bd-403b-b1e6-fbd724d15250\") "
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.907673 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-log" (OuterVolumeSpecName: "var-log") pod "db69fbf3-38bd-403b-b1e6-fbd724d15250" (UID: "db69fbf3-38bd-403b-b1e6-fbd724d15250"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.908976 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "db69fbf3-38bd-403b-b1e6-fbd724d15250" (UID: "db69fbf3-38bd-403b-b1e6-fbd724d15250"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.909034 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-run" (OuterVolumeSpecName: "var-run") pod "db69fbf3-38bd-403b-b1e6-fbd724d15250" (UID: "db69fbf3-38bd-403b-b1e6-fbd724d15250"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.910024 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db69fbf3-38bd-403b-b1e6-fbd724d15250-scripts" (OuterVolumeSpecName: "scripts") pod "db69fbf3-38bd-403b-b1e6-fbd724d15250" (UID: "db69fbf3-38bd-403b-b1e6-fbd724d15250"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.910117 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-lib" (OuterVolumeSpecName: "var-lib") pod "db69fbf3-38bd-403b-b1e6-fbd724d15250" (UID: "db69fbf3-38bd-403b-b1e6-fbd724d15250"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.915419 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db69fbf3-38bd-403b-b1e6-fbd724d15250-kube-api-access-dx8qt" (OuterVolumeSpecName: "kube-api-access-dx8qt") pod "db69fbf3-38bd-403b-b1e6-fbd724d15250" (UID: "db69fbf3-38bd-403b-b1e6-fbd724d15250"). InnerVolumeSpecName "kube-api-access-dx8qt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.971104 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987015 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wcng"] Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987367 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="galera" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987385 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="galera" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987398 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-api" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987404 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-api" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987417 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" containerName="nova-cell1-conductor-conductor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987423 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" containerName="nova-cell1-conductor-conductor" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987428 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="proxy-httpd" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987435 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="proxy-httpd" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987451 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987458 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987466 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-updater" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987473 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-updater" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987481 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987488 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-server" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987498 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerName="setup-container" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987505 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerName="setup-container" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987513 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987519 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987535 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-central-agent" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987541 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-central-agent" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987550 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-updater" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987556 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-updater" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987567 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server-init" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987576 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server-init" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987586 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987593 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-server" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987603 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="swift-recon-cron" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987611 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="swift-recon-cron" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987622 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-reaper" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987629 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-reaper" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987639 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077397e1-1b51-4160-bbd5-8d44b9e9bae3" containerName="collect-profiles" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987645 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="077397e1-1b51-4160-bbd5-8d44b9e9bae3" containerName="collect-profiles" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987654 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987662 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987675 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-httpd" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987683 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-httpd" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987695 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="mysql-bootstrap" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987702 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="mysql-bootstrap" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987713 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="galera" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987720 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="galera" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987728 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="rabbitmq" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987735 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="rabbitmq" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987746 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987756 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987769 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="rsync" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987776 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="rsync" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987787 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="sg-core" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987794 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="sg-core" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987805 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987812 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987825 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-expirer" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987833 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-expirer" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987846 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987853 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987863 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987870 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987893 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker-log" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987901 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker-log" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987914 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerName="rabbitmq" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987921 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerName="rabbitmq" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987931 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987941 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987952 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987959 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987968 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-notification-agent" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987974 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-notification-agent" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.987985 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.987992 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-server" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.988004 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="mysql-bootstrap" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988011 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="mysql-bootstrap" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.988021 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2d2a3529-662b-4eb6-aebd-c15e694cab4e" containerName="keystone-api" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988028 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2a3529-662b-4eb6-aebd-c15e694cab4e" containerName="keystone-api" Jan 29 17:00:21 crc kubenswrapper[4746]: E0129 17:00:21.988038 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="setup-container" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988045 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="setup-container" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988230 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovs-vswitchd" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988246 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerName="ovsdb-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988253 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker-log" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988264 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988276 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-api" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.988287 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="077397e1-1b51-4160-bbd5-8d44b9e9bae3" containerName="collect-profiles" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990001 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-notification-agent" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990038 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19d23b1-5d41-40a9-88ee-23a039de0ed7" containerName="barbican-worker" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990048 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="rsync" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990058 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990066 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990074 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="717a3fe2-fd76-47c2-b7f2-859dd5186f9c" containerName="galera" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990088 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98c0c71-5d0c-48b2-a7d6-515a44ded344" containerName="nova-cell1-conductor-conductor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990098 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c96526-7c37-42c2-896e-b551dd6ed5b8" containerName="rabbitmq" Jan 29 
17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990105 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990113 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-updater" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990123 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-auditor" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990132 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990138 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="da3e5e7d-45e7-4ee6-a400-bd00932ea1d6" containerName="neutron-httpd" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990147 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="sg-core" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990156 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5617cc-a91a-4eb7-83d9-25f01bcb890c" containerName="galera" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990165 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-updater" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990173 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2a3529-662b-4eb6-aebd-c15e694cab4e" containerName="keystone-api" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990180 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="proxy-httpd" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990199 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990208 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-reaper" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990217 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aad3209-fb2f-42b9-8fc3-6c3bf4ac0a90" containerName="ceilometer-central-agent" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990229 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="account-replicator" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990236 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="container-server" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990244 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="swift-recon-cron" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990251 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6e0a39-5c0e-4632-bc24-dd8c7eb25788" containerName="rabbitmq" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.990258 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerName="object-expirer" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.991688 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:21 crc kubenswrapper[4746]: I0129 17:00:21.993654 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wcng"] Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.008776 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-combined-ca-bundle\") pod \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-lock\") pod \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009102 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") pod \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009123 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009146 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-656bn\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-kube-api-access-656bn\") pod \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009233 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-cache\") pod \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\" (UID: \"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb\") " Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009333 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-utilities\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009385 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfzws\" (UniqueName: \"kubernetes.io/projected/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-kube-api-access-nfzws\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009405 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-catalog-content\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009460 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db69fbf3-38bd-403b-b1e6-fbd724d15250-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009470 4746 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009479 4746 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009487 4746 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-lib\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009497 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx8qt\" (UniqueName: \"kubernetes.io/projected/db69fbf3-38bd-403b-b1e6-fbd724d15250-kube-api-access-dx8qt\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.009507 4746 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/db69fbf3-38bd-403b-b1e6-fbd724d15250-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.018218 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-kube-api-access-656bn" (OuterVolumeSpecName: "kube-api-access-656bn") pod "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb"). InnerVolumeSpecName "kube-api-access-656bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.018597 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-lock" (OuterVolumeSpecName: "lock") pod "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.019008 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-cache" (OuterVolumeSpecName: "cache") pod "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.020676 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.024687 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110774 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-utilities\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110851 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfzws\" (UniqueName: \"kubernetes.io/projected/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-kube-api-access-nfzws\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110875 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-catalog-content\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110913 4746 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-cache\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110923 4746 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-lock\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110932 4746 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110952 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.110961 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-656bn\" (UniqueName: \"kubernetes.io/projected/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-kube-api-access-656bn\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.111891 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-catalog-content\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.111916 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-utilities\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.125776 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.133938 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfzws\" (UniqueName: \"kubernetes.io/projected/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-kube-api-access-nfzws\") pod \"community-operators-7wcng\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.143625 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hlgxj_db69fbf3-38bd-403b-b1e6-fbd724d15250/ovs-vswitchd/0.log" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.144506 4746 generic.go:334] "Generic (PLEG): container finished" podID="db69fbf3-38bd-403b-b1e6-fbd724d15250" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" exitCode=137 Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.144578 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlgxj" event={"ID":"db69fbf3-38bd-403b-b1e6-fbd724d15250","Type":"ContainerDied","Data":"b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b"} Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.144653 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hlgxj" event={"ID":"db69fbf3-38bd-403b-b1e6-fbd724d15250","Type":"ContainerDied","Data":"901fc0bf3b78a3ffedb25a61b7454f5d6b0326cbc8ff850830b2ff42d479a117"} Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.144711 4746 scope.go:117] "RemoveContainer" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.144566 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hlgxj" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.157940 4746 generic.go:334] "Generic (PLEG): container finished" podID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" containerID="ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8" exitCode=137 Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.157983 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8"} Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.158037 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4434dba0-90da-4ac0-8cd4-5c2babfdb2eb","Type":"ContainerDied","Data":"60d22ac1bcf8571bfff39d1d4b4e99ec8689944c0755be29527d1709185809d1"} Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.158086 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.168223 4746 scope.go:117] "RemoveContainer" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.183413 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-hlgxj"] Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.191168 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-hlgxj"] Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.192804 4746 scope.go:117] "RemoveContainer" containerID="184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.212110 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.216413 4746 scope.go:117] "RemoveContainer" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.217134 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b\": container with ID starting with b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b not found: ID does not exist" containerID="b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.217168 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b"} err="failed to get container status \"b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b\": rpc error: code = NotFound desc = could not find container \"b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b\": container with ID starting with b09afd95cd7ac4a25e730186b7e906e4c6117aff5e4e39526432798079c9961b not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.217221 4746 scope.go:117] "RemoveContainer" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.218999 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032\": container with ID starting with ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 not found: ID does not exist" containerID="ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.219037 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032"} err="failed to get container status \"ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032\": rpc error: code = NotFound desc = could not find container \"ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032\": container with ID starting with ba0ae13a359ed98963aaa33f421b924a118bf2aaa2afab28420307738612f032 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.219063 4746 scope.go:117] "RemoveContainer" 
containerID="184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.219338 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c\": container with ID starting with 184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c not found: ID does not exist" containerID="184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.219367 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c"} err="failed to get container status \"184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c\": rpc error: code = NotFound desc = could not find container \"184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c\": container with ID starting with 184f704758e9d122d22074b2123d3982f74bfe36009d76604d289055dbc3983c not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.219384 4746 scope.go:117] "RemoveContainer" containerID="ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.241207 4746 scope.go:117] "RemoveContainer" containerID="3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.258757 4746 scope.go:117] "RemoveContainer" containerID="0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.283065 4746 scope.go:117] "RemoveContainer" containerID="d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.287350 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" (UID: "4434dba0-90da-4ac0-8cd4-5c2babfdb2eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.304633 4746 scope.go:117] "RemoveContainer" containerID="cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.310411 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.312521 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.323279 4746 scope.go:117] "RemoveContainer" containerID="77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.452368 4746 scope.go:117] "RemoveContainer" containerID="66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.467404 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db69fbf3-38bd-403b-b1e6-fbd724d15250" path="/var/lib/kubelet/pods/db69fbf3-38bd-403b-b1e6-fbd724d15250/volumes" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.488963 4746 scope.go:117] "RemoveContainer" containerID="7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.496715 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.508621 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.546350 4746 scope.go:117] "RemoveContainer" containerID="86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.567062 4746 scope.go:117] "RemoveContainer" containerID="f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.583402 4746 scope.go:117] "RemoveContainer" containerID="9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.603574 4746 scope.go:117] "RemoveContainer" containerID="d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.631213 4746 scope.go:117] "RemoveContainer" containerID="30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.648555 4746 scope.go:117] "RemoveContainer" containerID="55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.665633 4746 scope.go:117] "RemoveContainer" containerID="22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.695303 4746 scope.go:117] "RemoveContainer" containerID="ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.698969 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8\": container with ID starting with ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8 not found: ID does not exist" containerID="ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.699021 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8"} err="failed to get container status \"ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8\": rpc error: code = NotFound desc = could not find container \"ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8\": container with ID starting with ad054e0206c9c5f882e4ea00d5f089c44d6d3306a67b34df0625a102dc63dba8 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.699055 4746 scope.go:117] "RemoveContainer" containerID="3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.699526 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab\": container with ID starting with 3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab not found: ID does not exist" containerID="3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.699586 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab"} err="failed to get container status \"3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab\": rpc error: code = NotFound desc = could not find container \"3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab\": container with ID starting with 3047ff994439d873e577c79a9cb398eb84a749325800266fc24b99e273e057ab not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.699616 4746 scope.go:117] "RemoveContainer" containerID="0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.700026 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844\": container with ID starting with 0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844 not found: ID does not exist" containerID="0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.700051 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844"} err="failed to get container status \"0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844\": rpc error: code = NotFound desc = could not find container \"0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844\": container with ID starting with 0a202460530cacc44c1982bad08a24be5aafe4a4757636c19d2b56c7e6ffc844 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.700069 4746 scope.go:117] "RemoveContainer" containerID="d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.700480 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2\": container with ID starting with d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2 not found: ID does not exist" 
containerID="d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.700515 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2"} err="failed to get container status \"d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2\": rpc error: code = NotFound desc = could not find container \"d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2\": container with ID starting with d5247f6359a1a826c541cba5cf9678d9792c09cd00166bb7755bd856181038f2 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.700535 4746 scope.go:117] "RemoveContainer" containerID="cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.700968 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103\": container with ID starting with cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103 not found: ID does not exist" containerID="cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.700999 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103"} err="failed to get container status \"cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103\": rpc error: code = NotFound desc = could not find container \"cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103\": container with ID starting with cbc13d2cf6065e4bc258da6420f3b74a547dfec5149354b3ded667fefdef0103 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.701018 4746 scope.go:117] "RemoveContainer" containerID="77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.701308 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce\": container with ID starting with 77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce not found: ID does not exist" containerID="77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.701337 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce"} err="failed to get container status \"77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce\": rpc error: code = NotFound desc = could not find container \"77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce\": container with ID starting with 77db5c4a65945a446fae3e4f6cc2772db47124c2d29b4eb6f41ca6d037cb6cce not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.701359 4746 scope.go:117] "RemoveContainer" containerID="66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.701712 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf\": container with ID starting with 66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf not found: ID does not exist" containerID="66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.701743 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf"} err="failed to get container status \"66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf\": rpc error: code = NotFound desc = could not find container \"66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf\": container with ID starting with 66a8b5cdd44225dcbf11700911c2fbafc96a9b9f4210f586f81ff2147eee9dbf not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.701762 4746 scope.go:117] "RemoveContainer" containerID="7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.702013 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad\": container with ID starting with 7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad not found: ID does not exist" containerID="7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.702041 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad"} err="failed to get container status \"7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad\": rpc error: code = NotFound desc = could not find container \"7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad\": container with ID starting with 7145c49db36ab9eeda63b2bfddda2c07fc0779ae9ac42cf7492d258d119136ad not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.702059 4746 scope.go:117] "RemoveContainer" containerID="86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.702331 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e\": container with ID starting with 86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e not found: ID does not exist" containerID="86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.702368 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e"} err="failed to get container status \"86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e\": rpc error: code = NotFound desc = could not find container \"86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e\": container with ID starting with 86042ac0bc59bb1150382d485938fc33f5c15bcf7a254dc1482b5d9ea792483e not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.702388 4746 scope.go:117] "RemoveContainer" containerID="f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a" Jan 29 17:00:22 crc 
kubenswrapper[4746]: E0129 17:00:22.702680 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a\": container with ID starting with f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a not found: ID does not exist" containerID="f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.702701 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a"} err="failed to get container status \"f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a\": rpc error: code = NotFound desc = could not find container \"f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a\": container with ID starting with f21a32a2779c9081c9221737f402ed7e52775b753468b4cfffdf2e8883c5d23a not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.702717 4746 scope.go:117] "RemoveContainer" containerID="9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.703097 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26\": container with ID starting with 9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26 not found: ID does not exist" containerID="9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.703128 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26"} err="failed to get container status \"9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26\": rpc error: code = NotFound desc = could not find container \"9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26\": container with ID starting with 9cb9c1d867a2e0956c22ff78c454a252fcab1fc587b2b2f59daa0464b4edbf26 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.703148 4746 scope.go:117] "RemoveContainer" containerID="d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.703400 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db\": container with ID starting with d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db not found: ID does not exist" containerID="d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.703421 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db"} err="failed to get container status \"d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db\": rpc error: code = NotFound desc = could not find container \"d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db\": container with ID starting with d30b28c4ea8e2800917cd724dbdbc776cdee073ecb9e01dc40badc88b4e3e1db not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: 
I0129 17:00:22.703434 4746 scope.go:117] "RemoveContainer" containerID="30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.703738 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596\": container with ID starting with 30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596 not found: ID does not exist" containerID="30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.703767 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596"} err="failed to get container status \"30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596\": rpc error: code = NotFound desc = could not find container \"30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596\": container with ID starting with 30dc4fad5a0ead83655222ad04972543c34a636921460ef7c1b9464b492f4596 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.703786 4746 scope.go:117] "RemoveContainer" containerID="55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.704068 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5\": container with ID starting with 55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5 not found: ID does not exist" containerID="55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.704088 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5"} err="failed to get container status \"55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5\": rpc error: code = NotFound desc = could not find container \"55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5\": container with ID starting with 55caa4b5155b214d5c55eec30872c883748559dd9f350de1ffbd42ff50c956f5 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.704099 4746 scope.go:117] "RemoveContainer" containerID="22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160" Jan 29 17:00:22 crc kubenswrapper[4746]: E0129 17:00:22.704447 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160\": container with ID starting with 22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160 not found: ID does not exist" containerID="22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.704478 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160"} err="failed to get container status \"22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160\": rpc error: code = NotFound desc = could not find container \"22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160\": container 
with ID starting with 22c2a1dd70458a80b06dcbae92693605851e3120737ac1338bb3a15469a96160 not found: ID does not exist" Jan 29 17:00:22 crc kubenswrapper[4746]: I0129 17:00:22.805879 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wcng"] Jan 29 17:00:22 crc kubenswrapper[4746]: W0129 17:00:22.812822 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3e43a0a_0210_4cb4_be5f_e0121b0e1b80.slice/crio-e4bd93ff3bdb792e68ef2334a6c12b2300f3f67bf98b5dc4252bc8ff83d1ff3a WatchSource:0}: Error finding container e4bd93ff3bdb792e68ef2334a6c12b2300f3f67bf98b5dc4252bc8ff83d1ff3a: Status 404 returned error can't find the container with id e4bd93ff3bdb792e68ef2334a6c12b2300f3f67bf98b5dc4252bc8ff83d1ff3a Jan 29 17:00:23 crc kubenswrapper[4746]: I0129 17:00:23.166813 4746 generic.go:334] "Generic (PLEG): container finished" podID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerID="9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103" exitCode=0 Jan 29 17:00:23 crc kubenswrapper[4746]: I0129 17:00:23.166848 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcng" event={"ID":"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80","Type":"ContainerDied","Data":"9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103"} Jan 29 17:00:23 crc kubenswrapper[4746]: I0129 17:00:23.166903 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcng" event={"ID":"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80","Type":"ContainerStarted","Data":"e4bd93ff3bdb792e68ef2334a6c12b2300f3f67bf98b5dc4252bc8ff83d1ff3a"} Jan 29 17:00:24 crc kubenswrapper[4746]: I0129 17:00:24.465343 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4434dba0-90da-4ac0-8cd4-5c2babfdb2eb" path="/var/lib/kubelet/pods/4434dba0-90da-4ac0-8cd4-5c2babfdb2eb/volumes" Jan 29 17:00:25 crc kubenswrapper[4746]: I0129 17:00:25.184980 4746 generic.go:334] "Generic (PLEG): container finished" podID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerID="c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91" exitCode=0 Jan 29 17:00:25 crc kubenswrapper[4746]: I0129 17:00:25.185052 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcng" event={"ID":"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80","Type":"ContainerDied","Data":"c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91"} Jan 29 17:00:26 crc kubenswrapper[4746]: I0129 17:00:26.198562 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcng" event={"ID":"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80","Type":"ContainerStarted","Data":"b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2"} Jan 29 17:00:26 crc kubenswrapper[4746]: I0129 17:00:26.224721 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wcng" podStartSLOduration=2.80153368 podStartE2EDuration="5.224690977s" podCreationTimestamp="2026-01-29 17:00:21 +0000 UTC" firstStartedPulling="2026-01-29 17:00:23.16797495 +0000 UTC m=+1545.568559594" lastFinishedPulling="2026-01-29 17:00:25.591132217 +0000 UTC m=+1547.991716891" observedRunningTime="2026-01-29 17:00:26.215894116 +0000 UTC m=+1548.616478760" watchObservedRunningTime="2026-01-29 17:00:26.224690977 +0000 UTC m=+1548.625275651" Jan 29 17:00:32 crc kubenswrapper[4746]: 
I0129 17:00:32.311532 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:32 crc kubenswrapper[4746]: I0129 17:00:32.312251 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:32 crc kubenswrapper[4746]: I0129 17:00:32.357285 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:32 crc kubenswrapper[4746]: I0129 17:00:32.445520 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:00:32 crc kubenswrapper[4746]: E0129 17:00:32.445792 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:00:33 crc kubenswrapper[4746]: I0129 17:00:33.307908 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:33 crc kubenswrapper[4746]: I0129 17:00:33.350000 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wcng"] Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.282485 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wcng" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="registry-server" containerID="cri-o://b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2" gracePeriod=2 Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.727992 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.913648 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-utilities\") pod \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.914319 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfzws\" (UniqueName: \"kubernetes.io/projected/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-kube-api-access-nfzws\") pod \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.914567 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-catalog-content\") pod \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\" (UID: \"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80\") " Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.915405 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-utilities" (OuterVolumeSpecName: "utilities") pod "f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" (UID: "f3e43a0a-0210-4cb4-be5f-e0121b0e1b80"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.920293 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-kube-api-access-nfzws" (OuterVolumeSpecName: "kube-api-access-nfzws") pod "f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" (UID: "f3e43a0a-0210-4cb4-be5f-e0121b0e1b80"). InnerVolumeSpecName "kube-api-access-nfzws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:35 crc kubenswrapper[4746]: I0129 17:00:35.978904 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" (UID: "f3e43a0a-0210-4cb4-be5f-e0121b0e1b80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.016028 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfzws\" (UniqueName: \"kubernetes.io/projected/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-kube-api-access-nfzws\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.016085 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.016106 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.294093 4746 generic.go:334] "Generic (PLEG): container finished" podID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerID="b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2" exitCode=0 Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.294142 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcng" event={"ID":"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80","Type":"ContainerDied","Data":"b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2"} Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.294177 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wcng" event={"ID":"f3e43a0a-0210-4cb4-be5f-e0121b0e1b80","Type":"ContainerDied","Data":"e4bd93ff3bdb792e68ef2334a6c12b2300f3f67bf98b5dc4252bc8ff83d1ff3a"} Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.294215 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wcng" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.294226 4746 scope.go:117] "RemoveContainer" containerID="b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.319340 4746 scope.go:117] "RemoveContainer" containerID="c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.350621 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wcng"] Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.351363 4746 scope.go:117] "RemoveContainer" containerID="9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.361934 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wcng"] Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.386238 4746 scope.go:117] "RemoveContainer" containerID="b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2" Jan 29 17:00:36 crc kubenswrapper[4746]: E0129 17:00:36.386670 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2\": container with ID starting with b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2 not found: ID does not exist" containerID="b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.386712 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2"} err="failed to get container status \"b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2\": rpc error: code = NotFound desc = could not find container \"b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2\": container with ID starting with b233e8fbb22e4c32b90fa21f98a4c2e04f03fb187dbda480171054dcfdbb38a2 not found: ID does not exist" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.386736 4746 scope.go:117] "RemoveContainer" containerID="c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91" Jan 29 17:00:36 crc kubenswrapper[4746]: E0129 17:00:36.387298 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91\": container with ID starting with c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91 not found: ID does not exist" containerID="c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.387395 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91"} err="failed to get container status \"c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91\": rpc error: code = NotFound desc = could not find container \"c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91\": container with ID starting with c760878dc32464150f8dfd8d60cccb79e37f55aef80a6f2602444f24f7c29e91 not found: ID does not exist" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.387428 4746 scope.go:117] "RemoveContainer" 
containerID="9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103" Jan 29 17:00:36 crc kubenswrapper[4746]: E0129 17:00:36.387797 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103\": container with ID starting with 9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103 not found: ID does not exist" containerID="9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.387848 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103"} err="failed to get container status \"9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103\": rpc error: code = NotFound desc = could not find container \"9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103\": container with ID starting with 9668d6bc8cf7e6a01dc832a6355bed97bb166b2613832399b901b75a363ed103 not found: ID does not exist" Jan 29 17:00:36 crc kubenswrapper[4746]: I0129 17:00:36.455404 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" path="/var/lib/kubelet/pods/f3e43a0a-0210-4cb4-be5f-e0121b0e1b80/volumes" Jan 29 17:00:45 crc kubenswrapper[4746]: I0129 17:00:45.446237 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:00:45 crc kubenswrapper[4746]: E0129 17:00:45.447021 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:00:58 crc kubenswrapper[4746]: I0129 17:00:58.450333 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:00:58 crc kubenswrapper[4746]: E0129 17:00:58.451062 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.330009 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-24w5p"] Jan 29 17:01:00 crc kubenswrapper[4746]: E0129 17:01:00.330649 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="extract-utilities" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.330666 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="extract-utilities" Jan 29 17:01:00 crc kubenswrapper[4746]: E0129 17:01:00.330680 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="registry-server" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 
17:01:00.330688 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="registry-server" Jan 29 17:01:00 crc kubenswrapper[4746]: E0129 17:01:00.330705 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="extract-content" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.330714 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="extract-content" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.330876 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e43a0a-0210-4cb4-be5f-e0121b0e1b80" containerName="registry-server" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.331794 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.344070 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24w5p"] Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.444126 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzb72\" (UniqueName: \"kubernetes.io/projected/37090e4d-72d2-4618-afad-d52105d6b7fc-kube-api-access-wzb72\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.444203 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-utilities\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.444251 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-catalog-content\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.545506 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzb72\" (UniqueName: \"kubernetes.io/projected/37090e4d-72d2-4618-afad-d52105d6b7fc-kube-api-access-wzb72\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.545557 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-utilities\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.545596 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-catalog-content\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 
crc kubenswrapper[4746]: I0129 17:01:00.546005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-catalog-content\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.546103 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-utilities\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.566052 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzb72\" (UniqueName: \"kubernetes.io/projected/37090e4d-72d2-4618-afad-d52105d6b7fc-kube-api-access-wzb72\") pod \"redhat-marketplace-24w5p\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:00 crc kubenswrapper[4746]: I0129 17:01:00.650967 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:01 crc kubenswrapper[4746]: I0129 17:01:01.079425 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24w5p"] Jan 29 17:01:01 crc kubenswrapper[4746]: I0129 17:01:01.517092 4746 generic.go:334] "Generic (PLEG): container finished" podID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerID="ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d" exitCode=0 Jan 29 17:01:01 crc kubenswrapper[4746]: I0129 17:01:01.517147 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24w5p" event={"ID":"37090e4d-72d2-4618-afad-d52105d6b7fc","Type":"ContainerDied","Data":"ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d"} Jan 29 17:01:01 crc kubenswrapper[4746]: I0129 17:01:01.517379 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24w5p" event={"ID":"37090e4d-72d2-4618-afad-d52105d6b7fc","Type":"ContainerStarted","Data":"571ebd4443aade2ee40b52280ec33da3461af4ca916aa2b41f3fd6760d756fa3"} Jan 29 17:01:03 crc kubenswrapper[4746]: I0129 17:01:03.533920 4746 generic.go:334] "Generic (PLEG): container finished" podID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerID="c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae" exitCode=0 Jan 29 17:01:03 crc kubenswrapper[4746]: I0129 17:01:03.534134 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24w5p" event={"ID":"37090e4d-72d2-4618-afad-d52105d6b7fc","Type":"ContainerDied","Data":"c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae"} Jan 29 17:01:04 crc kubenswrapper[4746]: I0129 17:01:04.543546 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24w5p" event={"ID":"37090e4d-72d2-4618-afad-d52105d6b7fc","Type":"ContainerStarted","Data":"3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a"} Jan 29 17:01:05 crc kubenswrapper[4746]: I0129 17:01:05.571805 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-24w5p" podStartSLOduration=2.8743322190000002 
podStartE2EDuration="5.571790011s" podCreationTimestamp="2026-01-29 17:01:00 +0000 UTC" firstStartedPulling="2026-01-29 17:01:01.518511736 +0000 UTC m=+1583.919096380" lastFinishedPulling="2026-01-29 17:01:04.215969528 +0000 UTC m=+1586.616554172" observedRunningTime="2026-01-29 17:01:05.569568164 +0000 UTC m=+1587.970152808" watchObservedRunningTime="2026-01-29 17:01:05.571790011 +0000 UTC m=+1587.972374655" Jan 29 17:01:09 crc kubenswrapper[4746]: I0129 17:01:09.445873 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:01:09 crc kubenswrapper[4746]: E0129 17:01:09.446694 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:01:10 crc kubenswrapper[4746]: I0129 17:01:10.651013 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:10 crc kubenswrapper[4746]: I0129 17:01:10.651066 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:10 crc kubenswrapper[4746]: I0129 17:01:10.701766 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:11 crc kubenswrapper[4746]: I0129 17:01:11.635028 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:11 crc kubenswrapper[4746]: I0129 17:01:11.681135 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24w5p"] Jan 29 17:01:13 crc kubenswrapper[4746]: I0129 17:01:13.613595 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-24w5p" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="registry-server" containerID="cri-o://3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a" gracePeriod=2 Jan 29 17:01:13 crc kubenswrapper[4746]: I0129 17:01:13.985861 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.030831 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-utilities\") pod \"37090e4d-72d2-4618-afad-d52105d6b7fc\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.030919 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-catalog-content\") pod \"37090e4d-72d2-4618-afad-d52105d6b7fc\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.031004 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzb72\" (UniqueName: \"kubernetes.io/projected/37090e4d-72d2-4618-afad-d52105d6b7fc-kube-api-access-wzb72\") pod \"37090e4d-72d2-4618-afad-d52105d6b7fc\" (UID: \"37090e4d-72d2-4618-afad-d52105d6b7fc\") " Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.033245 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-utilities" (OuterVolumeSpecName: "utilities") pod "37090e4d-72d2-4618-afad-d52105d6b7fc" (UID: "37090e4d-72d2-4618-afad-d52105d6b7fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.036862 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37090e4d-72d2-4618-afad-d52105d6b7fc-kube-api-access-wzb72" (OuterVolumeSpecName: "kube-api-access-wzb72") pod "37090e4d-72d2-4618-afad-d52105d6b7fc" (UID: "37090e4d-72d2-4618-afad-d52105d6b7fc"). InnerVolumeSpecName "kube-api-access-wzb72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.056617 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37090e4d-72d2-4618-afad-d52105d6b7fc" (UID: "37090e4d-72d2-4618-afad-d52105d6b7fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.132029 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzb72\" (UniqueName: \"kubernetes.io/projected/37090e4d-72d2-4618-afad-d52105d6b7fc-kube-api-access-wzb72\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.132062 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.132075 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37090e4d-72d2-4618-afad-d52105d6b7fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.622702 4746 generic.go:334] "Generic (PLEG): container finished" podID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerID="3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a" exitCode=0 Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.622733 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24w5p" event={"ID":"37090e4d-72d2-4618-afad-d52105d6b7fc","Type":"ContainerDied","Data":"3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a"} Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.623724 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24w5p" event={"ID":"37090e4d-72d2-4618-afad-d52105d6b7fc","Type":"ContainerDied","Data":"571ebd4443aade2ee40b52280ec33da3461af4ca916aa2b41f3fd6760d756fa3"} Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.622817 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24w5p" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.623784 4746 scope.go:117] "RemoveContainer" containerID="3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.644029 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24w5p"] Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.647793 4746 scope.go:117] "RemoveContainer" containerID="c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.649595 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-24w5p"] Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.670075 4746 scope.go:117] "RemoveContainer" containerID="ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.697723 4746 scope.go:117] "RemoveContainer" containerID="3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a" Jan 29 17:01:14 crc kubenswrapper[4746]: E0129 17:01:14.698290 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a\": container with ID starting with 3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a not found: ID does not exist" containerID="3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.698337 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a"} err="failed to get container status \"3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a\": rpc error: code = NotFound desc = could not find container \"3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a\": container with ID starting with 3dee446a9704ae48d5b2f64450f902f4c6a754732e9019b07fced5154a7fa64a not found: ID does not exist" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.698366 4746 scope.go:117] "RemoveContainer" containerID="c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae" Jan 29 17:01:14 crc kubenswrapper[4746]: E0129 17:01:14.698798 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae\": container with ID starting with c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae not found: ID does not exist" containerID="c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.698845 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae"} err="failed to get container status \"c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae\": rpc error: code = NotFound desc = could not find container \"c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae\": container with ID starting with c2b98da96440da35e2fac15d0091edd6f862bbfc1c31ce49cbf3b131e60a67ae not found: ID does not exist" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.698878 4746 scope.go:117] "RemoveContainer" 
containerID="ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d" Jan 29 17:01:14 crc kubenswrapper[4746]: E0129 17:01:14.699294 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d\": container with ID starting with ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d not found: ID does not exist" containerID="ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d" Jan 29 17:01:14 crc kubenswrapper[4746]: I0129 17:01:14.699338 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d"} err="failed to get container status \"ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d\": rpc error: code = NotFound desc = could not find container \"ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d\": container with ID starting with ce036a03f6d34e4f30dd982761020a62cb1971ad0cd1986d1bdf3da0231af19d not found: ID does not exist" Jan 29 17:01:16 crc kubenswrapper[4746]: I0129 17:01:16.456650 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" path="/var/lib/kubelet/pods/37090e4d-72d2-4618-afad-d52105d6b7fc/volumes" Jan 29 17:01:20 crc kubenswrapper[4746]: I0129 17:01:20.445919 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:01:20 crc kubenswrapper[4746]: E0129 17:01:20.447831 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:01:31 crc kubenswrapper[4746]: I0129 17:01:31.446626 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:01:31 crc kubenswrapper[4746]: E0129 17:01:31.448575 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.474532 4746 scope.go:117] "RemoveContainer" containerID="2d5aee9a083acfc810858a7c87db20c7c5b3dafb9632e60480711cbea239dba1" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.494848 4746 scope.go:117] "RemoveContainer" containerID="0c10cc49ecb618eb08c28b96af93caf437f2eab603b005ba55fa890df2e8cb3d" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.518500 4746 scope.go:117] "RemoveContainer" containerID="f20c1208bb170c0dc12ec84c9358d47475e98d721184240642696df4d5199cc4" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.542693 4746 scope.go:117] "RemoveContainer" containerID="27f8653b06a0ddb1325cd3a04654b678b389b70a18422b7030b7d50e299dd4c3" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.570761 4746 scope.go:117] "RemoveContainer" 
containerID="5b1e351f12ff9822899af90b93ad119157ebdcc79e119352d35d3f52ab18cf79" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.612487 4746 scope.go:117] "RemoveContainer" containerID="91fce55c9d75c1b331d8bd42c9897a8976e8ccd42870a105712562f5ecc517d2" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.643556 4746 scope.go:117] "RemoveContainer" containerID="ea8f470075d65d280e96ac2d25ee771c7eb9e3d5af76de3a2e471ff31e55e67f" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.664181 4746 scope.go:117] "RemoveContainer" containerID="3376338c9ce4227a9c44f1784e6769778b27bd95c1b827647fc32a1f0b5f511b" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.685216 4746 scope.go:117] "RemoveContainer" containerID="79e1251fb71ce11a43cf32e7a28779e697303a7079a8d785c6ab9099c472b0a2" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.703596 4746 scope.go:117] "RemoveContainer" containerID="e9cbcfc12427a81b6e4b347a716f72f523dcadd22cde17f7da2bf401f66972c7" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.721223 4746 scope.go:117] "RemoveContainer" containerID="ad4b6ab3285c9071345dd17ada713cdadb52fd39a2d489befc05fc5b022fff09" Jan 29 17:01:40 crc kubenswrapper[4746]: I0129 17:01:40.750714 4746 scope.go:117] "RemoveContainer" containerID="35670f9a01e378fa8f461a089914897b236fe45d29b761ce22819d6a16d6a248" Jan 29 17:01:45 crc kubenswrapper[4746]: I0129 17:01:45.445125 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:01:45 crc kubenswrapper[4746]: E0129 17:01:45.445842 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:01:59 crc kubenswrapper[4746]: I0129 17:01:59.445284 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:01:59 crc kubenswrapper[4746]: E0129 17:01:59.446374 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:02:11 crc kubenswrapper[4746]: I0129 17:02:11.446946 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:02:11 crc kubenswrapper[4746]: E0129 17:02:11.449891 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:02:26 crc kubenswrapper[4746]: I0129 17:02:26.445261 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:02:26 crc 
kubenswrapper[4746]: E0129 17:02:26.447722 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:02:37 crc kubenswrapper[4746]: I0129 17:02:37.445601 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:02:37 crc kubenswrapper[4746]: E0129 17:02:37.446262 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:02:40 crc kubenswrapper[4746]: I0129 17:02:40.934072 4746 scope.go:117] "RemoveContainer" containerID="c78b0cc4c733ab33d81ae04bcb4447f430f04b3f564487ae982eadf1d345566d" Jan 29 17:02:40 crc kubenswrapper[4746]: I0129 17:02:40.958781 4746 scope.go:117] "RemoveContainer" containerID="0eca1ecffcb158ea772d21b15b7870aa8539dd40c4ec7be1285ce85180bf1e8a" Jan 29 17:02:40 crc kubenswrapper[4746]: I0129 17:02:40.998321 4746 scope.go:117] "RemoveContainer" containerID="46d73f84ade9cde1a2d80ad053100a17dec9500ab890a05557340b382b890e39" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.017016 4746 scope.go:117] "RemoveContainer" containerID="bda6ef59fe6c6aa36650accec8c47f1fb248a6d1176f6aed98b5503facb4cdb6" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.042028 4746 scope.go:117] "RemoveContainer" containerID="a10fd66061f7bcabb2d53d2602e9c073d65b49b0cbcf8a3376eb2de3bc4e75af" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.079387 4746 scope.go:117] "RemoveContainer" containerID="06338e398c5c955cb41a316a04585286b7feffc6ce84ecf5e5ca0fbabeb65c4c" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.098802 4746 scope.go:117] "RemoveContainer" containerID="1b15c59d49be889ef5cf6ad8a226bd7e4a9df62c00fdb98d1c98f69a191e1541" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.128525 4746 scope.go:117] "RemoveContainer" containerID="bb90355332813dd85cb7f7decec6421abdc591007933bd11bbc0f650d9a5034b" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.160710 4746 scope.go:117] "RemoveContainer" containerID="1fcd0fc16e0dc4d896486171c419af575bafdec450638ee76d77646a35a6e962" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.192878 4746 scope.go:117] "RemoveContainer" containerID="247ac07987850938ef89b14311ebd44b3cedeffae516773cfd9ba11573533376" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.207361 4746 scope.go:117] "RemoveContainer" containerID="564993abe23f748bcad00bc227395a3a07b6f9bffbb87815e97f254c228f5be2" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.235632 4746 scope.go:117] "RemoveContainer" containerID="2274a1f7eecc79da00178b91883fca776d9b8582250496e9800b7b5bdfcb84ba" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.260386 4746 scope.go:117] "RemoveContainer" containerID="633a8d8450a2c8efba7958505172f6f8ef9a64dcfdd943bce08046cda4c7b216" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.285892 4746 scope.go:117] 
"RemoveContainer" containerID="fb937fb01141dd73f6a7ebd7e0fdab6b206a2dc1e58ed1435f1143de26bf2408" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.315578 4746 scope.go:117] "RemoveContainer" containerID="ee13c42a317c4342e1606cf6ab0b6c008e98b7b4b966ff9748f1e45ad9609fae" Jan 29 17:02:41 crc kubenswrapper[4746]: I0129 17:02:41.344609 4746 scope.go:117] "RemoveContainer" containerID="0419721f2f0caa11c5b08fdd1ccc608f02549b1c0858e7d7e528265c7a907743" Jan 29 17:02:50 crc kubenswrapper[4746]: I0129 17:02:50.445560 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:02:50 crc kubenswrapper[4746]: E0129 17:02:50.446205 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:03:04 crc kubenswrapper[4746]: I0129 17:03:04.445779 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:03:04 crc kubenswrapper[4746]: E0129 17:03:04.446422 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:03:18 crc kubenswrapper[4746]: I0129 17:03:18.451610 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:03:18 crc kubenswrapper[4746]: E0129 17:03:18.453753 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:03:31 crc kubenswrapper[4746]: I0129 17:03:31.446991 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:03:31 crc kubenswrapper[4746]: E0129 17:03:31.447888 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.593871 4746 scope.go:117] "RemoveContainer" containerID="1ecc4b9d71375f4912841c0ce64ae3b86930bc5b0cdf26d905fb011a69e81a3f" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.614544 4746 scope.go:117] "RemoveContainer" containerID="ffe4f88f98c0c616c8a6607cb72e6acd7cdee0142ea8746e929924d4801cbfca" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 
17:03:41.639521 4746 scope.go:117] "RemoveContainer" containerID="676b786471a6f2475f210fe837e5982651b21b491df519ba3459ee1c6a079bf1" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.663777 4746 scope.go:117] "RemoveContainer" containerID="2c68e474ae68ccd262214e30cfd3e1d88e25431121c450bb429bf23bb47d050a" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.682345 4746 scope.go:117] "RemoveContainer" containerID="8570c70a880e99072977cb4e1698d7dd3b7ba1f3aac7236951149c68e8cd523d" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.717824 4746 scope.go:117] "RemoveContainer" containerID="1c07233ce1d10220cf97e784147f808ac75d4b0881f9c9f6a83233ede2ff6a31" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.732579 4746 scope.go:117] "RemoveContainer" containerID="021c90f39cc987692e39d8960c72a480e84d52e1479dba6e30fa872f71b14e33" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.786656 4746 scope.go:117] "RemoveContainer" containerID="5a97250572ad990f099c81e7ee46a00c3f12562feabfb5aa66086e13ecd618cc" Jan 29 17:03:41 crc kubenswrapper[4746]: I0129 17:03:41.824615 4746 scope.go:117] "RemoveContainer" containerID="b9ad39947cac608b67c1042a6d2058a56f2f61b58c5c87e8da33d420616856ec" Jan 29 17:03:42 crc kubenswrapper[4746]: I0129 17:03:42.445460 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:03:42 crc kubenswrapper[4746]: E0129 17:03:42.445825 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:03:56 crc kubenswrapper[4746]: I0129 17:03:56.445362 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:03:56 crc kubenswrapper[4746]: E0129 17:03:56.446062 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:04:10 crc kubenswrapper[4746]: I0129 17:04:10.445964 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:04:10 crc kubenswrapper[4746]: E0129 17:04:10.446618 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:04:22 crc kubenswrapper[4746]: I0129 17:04:22.446321 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:04:22 crc kubenswrapper[4746]: E0129 17:04:22.447382 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:04:36 crc kubenswrapper[4746]: I0129 17:04:36.445531 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:04:36 crc kubenswrapper[4746]: E0129 17:04:36.446263 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:04:42 crc kubenswrapper[4746]: I0129 17:04:42.033879 4746 scope.go:117] "RemoveContainer" containerID="4509b6065caa050a2798bb51537627795e31890f52e084af90edd17f47baad05" Jan 29 17:04:42 crc kubenswrapper[4746]: I0129 17:04:42.056945 4746 scope.go:117] "RemoveContainer" containerID="8ccbc80a4f44bd3867a165ceb05bc702002c328fc89c8be74ef7c2b80a85893d" Jan 29 17:04:42 crc kubenswrapper[4746]: I0129 17:04:42.074016 4746 scope.go:117] "RemoveContainer" containerID="c8656546478de17c0899fda5f6e66807296b60fedfeef0a8f00e903f198cf6ea" Jan 29 17:04:42 crc kubenswrapper[4746]: I0129 17:04:42.094466 4746 scope.go:117] "RemoveContainer" containerID="a6ca57c1b1427d4152c2d3d29d17abec1ff2930930f94171eb6a4832a28e0ff4" Jan 29 17:04:42 crc kubenswrapper[4746]: I0129 17:04:42.120569 4746 scope.go:117] "RemoveContainer" containerID="5621af2d5539e35318ee2f2c35d249a21df7981eeee1f5046ada00d4056a1baa" Jan 29 17:04:42 crc kubenswrapper[4746]: I0129 17:04:42.152902 4746 scope.go:117] "RemoveContainer" containerID="ae31072ec8addf54fcf59db27f019a8ca754b118d7d33ba75d44337ae689a8b2" Jan 29 17:04:50 crc kubenswrapper[4746]: I0129 17:04:50.446341 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156" Jan 29 17:04:51 crc kubenswrapper[4746]: I0129 17:04:51.261715 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"d4fc97495a84f73dd095ae206ffc3a9940b143fd816363f590921c357df35fb7"} Jan 29 17:05:42 crc kubenswrapper[4746]: I0129 17:05:42.237756 4746 scope.go:117] "RemoveContainer" containerID="f5dbc0994f4e33f3d35d508e2ee9e277a69d60f776de81a42fb9ff89c6a2d705" Jan 29 17:05:42 crc kubenswrapper[4746]: I0129 17:05:42.255549 4746 scope.go:117] "RemoveContainer" containerID="1198023c41c80e6dc1d51dd8e6370ba603f52d4acbb49ed3121e75fbb0054834" Jan 29 17:05:42 crc kubenswrapper[4746]: I0129 17:05:42.283573 4746 scope.go:117] "RemoveContainer" containerID="90d0f7c0ec8bee68f1032e1115bb3957e1cc29de95dedf8075f362d0b3ca5802" Jan 29 17:07:19 crc kubenswrapper[4746]: I0129 17:07:19.065549 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:07:19 crc kubenswrapper[4746]: I0129 
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.739541 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6dwwg"]
Jan 29 17:07:34 crc kubenswrapper[4746]: E0129 17:07:34.740418 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="registry-server"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.740435 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="registry-server"
Jan 29 17:07:34 crc kubenswrapper[4746]: E0129 17:07:34.740455 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="extract-content"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.740463 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="extract-content"
Jan 29 17:07:34 crc kubenswrapper[4746]: E0129 17:07:34.740488 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="extract-utilities"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.740496 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="extract-utilities"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.740636 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="37090e4d-72d2-4618-afad-d52105d6b7fc" containerName="registry-server"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.741703 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.763073 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dwwg"]
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.856667 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-catalog-content\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.856736 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-utilities\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.856773 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9klvd\" (UniqueName: \"kubernetes.io/projected/9304da95-7ce1-4862-96e2-3fe1777353f9-kube-api-access-9klvd\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.957863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-utilities\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.957931 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9klvd\" (UniqueName: \"kubernetes.io/projected/9304da95-7ce1-4862-96e2-3fe1777353f9-kube-api-access-9klvd\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.958065 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-catalog-content\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.958522 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-utilities\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.958573 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-catalog-content\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:34 crc kubenswrapper[4746]: I0129 17:07:34.984065 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9klvd\" (UniqueName: \"kubernetes.io/projected/9304da95-7ce1-4862-96e2-3fe1777353f9-kube-api-access-9klvd\") pod \"certified-operators-6dwwg\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") " pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:35 crc kubenswrapper[4746]: I0129 17:07:35.062136 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:35 crc kubenswrapper[4746]: I0129 17:07:35.340930 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dwwg"]
Jan 29 17:07:35 crc kubenswrapper[4746]: I0129 17:07:35.434298 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dwwg" event={"ID":"9304da95-7ce1-4862-96e2-3fe1777353f9","Type":"ContainerStarted","Data":"d73ef86a772dcbcfe388de5d2ec6e4f19b4a9f55c1315afa47c1e8137047758d"}
Jan 29 17:07:36 crc kubenswrapper[4746]: I0129 17:07:36.442239 4746 generic.go:334] "Generic (PLEG): container finished" podID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerID="7347a020662e78d2697ddff6d699c27a166831629fe4f4a2a2dfab3a7e675a85" exitCode=0
Jan 29 17:07:36 crc kubenswrapper[4746]: I0129 17:07:36.442297 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dwwg" event={"ID":"9304da95-7ce1-4862-96e2-3fe1777353f9","Type":"ContainerDied","Data":"7347a020662e78d2697ddff6d699c27a166831629fe4f4a2a2dfab3a7e675a85"}
Jan 29 17:07:36 crc kubenswrapper[4746]: I0129 17:07:36.446315 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 17:07:38 crc kubenswrapper[4746]: I0129 17:07:38.474993 4746 generic.go:334] "Generic (PLEG): container finished" podID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerID="b9291ae6bca5eece9559e692b5e1d5cb7644341342609d44cdbef56da8adb335" exitCode=0
Jan 29 17:07:38 crc kubenswrapper[4746]: I0129 17:07:38.475133 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dwwg" event={"ID":"9304da95-7ce1-4862-96e2-3fe1777353f9","Type":"ContainerDied","Data":"b9291ae6bca5eece9559e692b5e1d5cb7644341342609d44cdbef56da8adb335"}
Jan 29 17:07:39 crc kubenswrapper[4746]: I0129 17:07:39.484046 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dwwg" event={"ID":"9304da95-7ce1-4862-96e2-3fe1777353f9","Type":"ContainerStarted","Data":"2b9bb64042e658315619e15cff6ed54cee1cd219cabeb4564f9fb55faee0e00d"}
Jan 29 17:07:39 crc kubenswrapper[4746]: I0129 17:07:39.507391 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6dwwg" podStartSLOduration=3.093626509 podStartE2EDuration="5.507370302s" podCreationTimestamp="2026-01-29 17:07:34 +0000 UTC" firstStartedPulling="2026-01-29 17:07:36.446063357 +0000 UTC m=+1978.846648001" lastFinishedPulling="2026-01-29 17:07:38.85980715 +0000 UTC m=+1981.260391794" observedRunningTime="2026-01-29 17:07:39.502532901 +0000 UTC m=+1981.903117535" watchObservedRunningTime="2026-01-29 17:07:39.507370302 +0000 UTC m=+1981.907954946"
Jan 29 17:07:45 crc kubenswrapper[4746]: I0129 17:07:45.062991 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:45 crc kubenswrapper[4746]: I0129 17:07:45.063461 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:45 crc kubenswrapper[4746]: I0129 17:07:45.101730 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:45 crc kubenswrapper[4746]: I0129 17:07:45.571567 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:47 crc kubenswrapper[4746]: I0129 17:07:47.926600 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6dwwg"]
Jan 29 17:07:47 crc kubenswrapper[4746]: I0129 17:07:47.927058 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6dwwg" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="registry-server" containerID="cri-o://2b9bb64042e658315619e15cff6ed54cee1cd219cabeb4564f9fb55faee0e00d" gracePeriod=2
Jan 29 17:07:49 crc kubenswrapper[4746]: I0129 17:07:49.065071 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:07:49 crc kubenswrapper[4746]: I0129 17:07:49.065142 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:07:49 crc kubenswrapper[4746]: I0129 17:07:49.560767 4746 generic.go:334] "Generic (PLEG): container finished" podID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerID="2b9bb64042e658315619e15cff6ed54cee1cd219cabeb4564f9fb55faee0e00d" exitCode=0
Jan 29 17:07:49 crc kubenswrapper[4746]: I0129 17:07:49.560863 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dwwg" event={"ID":"9304da95-7ce1-4862-96e2-3fe1777353f9","Type":"ContainerDied","Data":"2b9bb64042e658315619e15cff6ed54cee1cd219cabeb4564f9fb55faee0e00d"}
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.552863 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.569332 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dwwg" event={"ID":"9304da95-7ce1-4862-96e2-3fe1777353f9","Type":"ContainerDied","Data":"d73ef86a772dcbcfe388de5d2ec6e4f19b4a9f55c1315afa47c1e8137047758d"}
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.569391 4746 scope.go:117] "RemoveContainer" containerID="2b9bb64042e658315619e15cff6ed54cee1cd219cabeb4564f9fb55faee0e00d"
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.569919 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dwwg"
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.607160 4746 scope.go:117] "RemoveContainer" containerID="b9291ae6bca5eece9559e692b5e1d5cb7644341342609d44cdbef56da8adb335"
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.625553 4746 scope.go:117] "RemoveContainer" containerID="7347a020662e78d2697ddff6d699c27a166831629fe4f4a2a2dfab3a7e675a85"
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.693597 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-catalog-content\") pod \"9304da95-7ce1-4862-96e2-3fe1777353f9\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") "
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.694097 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9klvd\" (UniqueName: \"kubernetes.io/projected/9304da95-7ce1-4862-96e2-3fe1777353f9-kube-api-access-9klvd\") pod \"9304da95-7ce1-4862-96e2-3fe1777353f9\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") "
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.694225 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-utilities\") pod \"9304da95-7ce1-4862-96e2-3fe1777353f9\" (UID: \"9304da95-7ce1-4862-96e2-3fe1777353f9\") "
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.695148 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-utilities" (OuterVolumeSpecName: "utilities") pod "9304da95-7ce1-4862-96e2-3fe1777353f9" (UID: "9304da95-7ce1-4862-96e2-3fe1777353f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.700227 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9304da95-7ce1-4862-96e2-3fe1777353f9-kube-api-access-9klvd" (OuterVolumeSpecName: "kube-api-access-9klvd") pod "9304da95-7ce1-4862-96e2-3fe1777353f9" (UID: "9304da95-7ce1-4862-96e2-3fe1777353f9"). InnerVolumeSpecName "kube-api-access-9klvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.742105 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9304da95-7ce1-4862-96e2-3fe1777353f9" (UID: "9304da95-7ce1-4862-96e2-3fe1777353f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.795646 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.795682 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9klvd\" (UniqueName: \"kubernetes.io/projected/9304da95-7ce1-4862-96e2-3fe1777353f9-kube-api-access-9klvd\") on node \"crc\" DevicePath \"\""
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.795696 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9304da95-7ce1-4862-96e2-3fe1777353f9-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.905872 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6dwwg"]
Jan 29 17:07:50 crc kubenswrapper[4746]: I0129 17:07:50.912620 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6dwwg"]
Jan 29 17:07:52 crc kubenswrapper[4746]: I0129 17:07:52.455045 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" path="/var/lib/kubelet/pods/9304da95-7ce1-4862-96e2-3fe1777353f9/volumes"
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.065834 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.066950 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.067044 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw"
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.068370 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4fc97495a84f73dd095ae206ffc3a9940b143fd816363f590921c357df35fb7"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.068508 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://d4fc97495a84f73dd095ae206ffc3a9940b143fd816363f590921c357df35fb7" gracePeriod=600
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.773128 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="d4fc97495a84f73dd095ae206ffc3a9940b143fd816363f590921c357df35fb7" exitCode=0
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.773172 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"d4fc97495a84f73dd095ae206ffc3a9940b143fd816363f590921c357df35fb7"}
Jan 29 17:08:19 crc kubenswrapper[4746]: I0129 17:08:19.773404 4746 scope.go:117] "RemoveContainer" containerID="4985f4ae9b383f8fbe5e66a01f7c2d31e541b18dc1da060bc6c8eddd44c2f156"
Jan 29 17:08:20 crc kubenswrapper[4746]: I0129 17:08:20.785034 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7"}
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.809850 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p46bm"]
Jan 29 17:08:57 crc kubenswrapper[4746]: E0129 17:08:57.810643 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="extract-utilities"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.810655 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="extract-utilities"
Jan 29 17:08:57 crc kubenswrapper[4746]: E0129 17:08:57.810670 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="registry-server"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.810676 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="registry-server"
Jan 29 17:08:57 crc kubenswrapper[4746]: E0129 17:08:57.810695 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="extract-content"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.810701 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="extract-content"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.810828 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9304da95-7ce1-4862-96e2-3fe1777353f9" containerName="registry-server"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.811806 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.827270 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p46bm"]
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.961175 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-utilities\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.961389 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtx6\" (UniqueName: \"kubernetes.io/projected/f4773fec-634a-4eac-9271-b81157e2d3db-kube-api-access-smtx6\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:57 crc kubenswrapper[4746]: I0129 17:08:57.961460 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-catalog-content\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.062898 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtx6\" (UniqueName: \"kubernetes.io/projected/f4773fec-634a-4eac-9271-b81157e2d3db-kube-api-access-smtx6\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.062968 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-catalog-content\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.063001 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-utilities\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.063452 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-utilities\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.063577 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-catalog-content\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.081585 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtx6\" (UniqueName: \"kubernetes.io/projected/f4773fec-634a-4eac-9271-b81157e2d3db-kube-api-access-smtx6\") pod \"redhat-operators-p46bm\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") " pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.131800 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:08:58 crc kubenswrapper[4746]: I0129 17:08:58.599587 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p46bm"]
Jan 29 17:08:59 crc kubenswrapper[4746]: I0129 17:08:59.102621 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4773fec-634a-4eac-9271-b81157e2d3db" containerID="027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f" exitCode=0
Jan 29 17:08:59 crc kubenswrapper[4746]: I0129 17:08:59.102670 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p46bm" event={"ID":"f4773fec-634a-4eac-9271-b81157e2d3db","Type":"ContainerDied","Data":"027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f"}
Jan 29 17:08:59 crc kubenswrapper[4746]: I0129 17:08:59.102718 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p46bm" event={"ID":"f4773fec-634a-4eac-9271-b81157e2d3db","Type":"ContainerStarted","Data":"215fd75bec1e7fb57019ab84c4b1170f9a19b4beea6a0c116f1a432093f527fe"}
Jan 29 17:09:01 crc kubenswrapper[4746]: I0129 17:09:01.117424 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p46bm" event={"ID":"f4773fec-634a-4eac-9271-b81157e2d3db","Type":"ContainerStarted","Data":"b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb"}
Jan 29 17:09:02 crc kubenswrapper[4746]: I0129 17:09:02.127284 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4773fec-634a-4eac-9271-b81157e2d3db" containerID="b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb" exitCode=0
Jan 29 17:09:02 crc kubenswrapper[4746]: I0129 17:09:02.127358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p46bm" event={"ID":"f4773fec-634a-4eac-9271-b81157e2d3db","Type":"ContainerDied","Data":"b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb"}
Jan 29 17:09:03 crc kubenswrapper[4746]: I0129 17:09:03.136893 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p46bm" event={"ID":"f4773fec-634a-4eac-9271-b81157e2d3db","Type":"ContainerStarted","Data":"96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5"}
Jan 29 17:09:03 crc kubenswrapper[4746]: I0129 17:09:03.158062 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p46bm" podStartSLOduration=2.700054099 podStartE2EDuration="6.158045384s" podCreationTimestamp="2026-01-29 17:08:57 +0000 UTC" firstStartedPulling="2026-01-29 17:08:59.104343951 +0000 UTC m=+2061.504928605" lastFinishedPulling="2026-01-29 17:09:02.562335246 +0000 UTC m=+2064.962919890" observedRunningTime="2026-01-29 17:09:03.152597035 +0000 UTC m=+2065.553181680" watchObservedRunningTime="2026-01-29 17:09:03.158045384 +0000 UTC m=+2065.558630028"
Jan 29 17:09:08 crc kubenswrapper[4746]: I0129 17:09:08.132099 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:09:08 crc kubenswrapper[4746]: I0129 17:09:08.132723 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:09:08 crc kubenswrapper[4746]: I0129 17:09:08.170868 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:09:08 crc kubenswrapper[4746]: I0129 17:09:08.209731 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:09:08 crc kubenswrapper[4746]: I0129 17:09:08.417238 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p46bm"]
Jan 29 17:09:10 crc kubenswrapper[4746]: I0129 17:09:10.179433 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p46bm" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="registry-server" containerID="cri-o://96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5" gracePeriod=2
Jan 29 17:09:11 crc kubenswrapper[4746]: I0129 17:09:11.940273 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:09:11 crc kubenswrapper[4746]: I0129 17:09:11.955865 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-catalog-content\") pod \"f4773fec-634a-4eac-9271-b81157e2d3db\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") "
Jan 29 17:09:11 crc kubenswrapper[4746]: I0129 17:09:11.955931 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-utilities\") pod \"f4773fec-634a-4eac-9271-b81157e2d3db\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") "
Jan 29 17:09:11 crc kubenswrapper[4746]: I0129 17:09:11.955966 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smtx6\" (UniqueName: \"kubernetes.io/projected/f4773fec-634a-4eac-9271-b81157e2d3db-kube-api-access-smtx6\") pod \"f4773fec-634a-4eac-9271-b81157e2d3db\" (UID: \"f4773fec-634a-4eac-9271-b81157e2d3db\") "
Jan 29 17:09:11 crc kubenswrapper[4746]: I0129 17:09:11.960142 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-utilities" (OuterVolumeSpecName: "utilities") pod "f4773fec-634a-4eac-9271-b81157e2d3db" (UID: "f4773fec-634a-4eac-9271-b81157e2d3db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:09:11 crc kubenswrapper[4746]: I0129 17:09:11.963451 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4773fec-634a-4eac-9271-b81157e2d3db-kube-api-access-smtx6" (OuterVolumeSpecName: "kube-api-access-smtx6") pod "f4773fec-634a-4eac-9271-b81157e2d3db" (UID: "f4773fec-634a-4eac-9271-b81157e2d3db"). InnerVolumeSpecName "kube-api-access-smtx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.057596 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.057704 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smtx6\" (UniqueName: \"kubernetes.io/projected/f4773fec-634a-4eac-9271-b81157e2d3db-kube-api-access-smtx6\") on node \"crc\" DevicePath \"\""
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.105528 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4773fec-634a-4eac-9271-b81157e2d3db" (UID: "f4773fec-634a-4eac-9271-b81157e2d3db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.158296 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4773fec-634a-4eac-9271-b81157e2d3db-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.193255 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4773fec-634a-4eac-9271-b81157e2d3db" containerID="96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5" exitCode=0
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.193308 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p46bm" event={"ID":"f4773fec-634a-4eac-9271-b81157e2d3db","Type":"ContainerDied","Data":"96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5"}
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.193339 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p46bm" event={"ID":"f4773fec-634a-4eac-9271-b81157e2d3db","Type":"ContainerDied","Data":"215fd75bec1e7fb57019ab84c4b1170f9a19b4beea6a0c116f1a432093f527fe"}
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.193359 4746 scope.go:117] "RemoveContainer" containerID="96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.193378 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p46bm"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.218545 4746 scope.go:117] "RemoveContainer" containerID="b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.227456 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p46bm"]
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.233941 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p46bm"]
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.244921 4746 scope.go:117] "RemoveContainer" containerID="027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.266025 4746 scope.go:117] "RemoveContainer" containerID="96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5"
Jan 29 17:09:12 crc kubenswrapper[4746]: E0129 17:09:12.266550 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5\": container with ID starting with 96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5 not found: ID does not exist" containerID="96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.266670 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5"} err="failed to get container status \"96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5\": rpc error: code = NotFound desc = could not find container \"96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5\": container with ID starting with 96c289179c7f413fcf385903fee67cc496477834100f16a0c33fc86107d4bdf5 not found: ID does not exist"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.266747 4746 scope.go:117] "RemoveContainer" containerID="b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb"
Jan 29 17:09:12 crc kubenswrapper[4746]: E0129 17:09:12.267123 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb\": container with ID starting with b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb not found: ID does not exist" containerID="b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.267179 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb"} err="failed to get container status \"b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb\": rpc error: code = NotFound desc = could not find container \"b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb\": container with ID starting with b2f4f9314f7b3a961977181e85cbee40448ba28f16074176c52ff040c7e33deb not found: ID does not exist"
Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.267299 4746 scope.go:117] "RemoveContainer" containerID="027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f"
Jan 29 17:09:12 crc kubenswrapper[4746]: E0129 17:09:12.267657 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f\": container with ID starting with 027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f not found: ID does not exist" containerID="027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f"
err="rpc error: code = NotFound desc = could not find container \"027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f\": container with ID starting with 027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f not found: ID does not exist" containerID="027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f" Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.267722 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f"} err="failed to get container status \"027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f\": rpc error: code = NotFound desc = could not find container \"027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f\": container with ID starting with 027ef49f52a052ee33dd3c5516867662ef1995eaf4be63db5ec8bdbf3e14ef1f not found: ID does not exist" Jan 29 17:09:12 crc kubenswrapper[4746]: I0129 17:09:12.455217 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" path="/var/lib/kubelet/pods/f4773fec-634a-4eac-9271-b81157e2d3db/volumes" Jan 29 17:10:19 crc kubenswrapper[4746]: I0129 17:10:19.065311 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:10:19 crc kubenswrapper[4746]: I0129 17:10:19.065846 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:10:49 crc kubenswrapper[4746]: I0129 17:10:49.065336 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:10:49 crc kubenswrapper[4746]: I0129 17:10:49.065852 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:11:19 crc kubenswrapper[4746]: I0129 17:11:19.065436 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:11:19 crc kubenswrapper[4746]: I0129 17:11:19.066002 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:11:19 crc kubenswrapper[4746]: I0129 17:11:19.066045 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 17:11:19 crc kubenswrapper[4746]: I0129 17:11:19.066559 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:11:19 crc kubenswrapper[4746]: I0129 17:11:19.066622 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" gracePeriod=600 Jan 29 17:11:19 crc kubenswrapper[4746]: E0129 17:11:19.724718 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:11:20 crc kubenswrapper[4746]: I0129 17:11:20.071312 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" exitCode=0 Jan 29 17:11:20 crc kubenswrapper[4746]: I0129 17:11:20.071371 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7"} Jan 29 17:11:20 crc kubenswrapper[4746]: I0129 17:11:20.071423 4746 scope.go:117] "RemoveContainer" containerID="d4fc97495a84f73dd095ae206ffc3a9940b143fd816363f590921c357df35fb7" Jan 29 17:11:20 crc kubenswrapper[4746]: I0129 17:11:20.071905 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:11:20 crc kubenswrapper[4746]: E0129 17:11:20.072108 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:11:31 crc kubenswrapper[4746]: I0129 17:11:31.445821 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:11:31 crc kubenswrapper[4746]: E0129 17:11:31.446943 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:11:42 crc 
kubenswrapper[4746]: I0129 17:11:42.445894 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:11:42 crc kubenswrapper[4746]: E0129 17:11:42.446650 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.233355 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jv7l8"] Jan 29 17:11:50 crc kubenswrapper[4746]: E0129 17:11:50.235198 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="registry-server" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.235305 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="registry-server" Jan 29 17:11:50 crc kubenswrapper[4746]: E0129 17:11:50.235400 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="extract-utilities" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.235491 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="extract-utilities" Jan 29 17:11:50 crc kubenswrapper[4746]: E0129 17:11:50.235585 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="extract-content" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.235733 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="extract-content" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.236018 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4773fec-634a-4eac-9271-b81157e2d3db" containerName="registry-server" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.237169 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.247077 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jv7l8"] Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.437119 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks4qz\" (UniqueName: \"kubernetes.io/projected/32b9d421-df3e-41ae-aed8-a90e40f19fff-kube-api-access-ks4qz\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.437202 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-catalog-content\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.437262 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-utilities\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.538705 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks4qz\" (UniqueName: \"kubernetes.io/projected/32b9d421-df3e-41ae-aed8-a90e40f19fff-kube-api-access-ks4qz\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.538763 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-catalog-content\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.539289 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-utilities\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.539642 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-catalog-content\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.539730 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-utilities\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.559171 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ks4qz\" (UniqueName: \"kubernetes.io/projected/32b9d421-df3e-41ae-aed8-a90e40f19fff-kube-api-access-ks4qz\") pod \"redhat-marketplace-jv7l8\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:50 crc kubenswrapper[4746]: I0129 17:11:50.857708 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:11:51 crc kubenswrapper[4746]: I0129 17:11:51.247107 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jv7l8"] Jan 29 17:11:51 crc kubenswrapper[4746]: I0129 17:11:51.271067 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jv7l8" event={"ID":"32b9d421-df3e-41ae-aed8-a90e40f19fff","Type":"ContainerStarted","Data":"b4f9c499613e1f7be206a263a55ea1f6a600c62cc22b3f63168e04f87029ea11"} Jan 29 17:11:52 crc kubenswrapper[4746]: I0129 17:11:52.296697 4746 generic.go:334] "Generic (PLEG): container finished" podID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerID="b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557" exitCode=0 Jan 29 17:11:52 crc kubenswrapper[4746]: I0129 17:11:52.297035 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jv7l8" event={"ID":"32b9d421-df3e-41ae-aed8-a90e40f19fff","Type":"ContainerDied","Data":"b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557"} Jan 29 17:11:53 crc kubenswrapper[4746]: I0129 17:11:53.308605 4746 generic.go:334] "Generic (PLEG): container finished" podID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerID="1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db" exitCode=0 Jan 29 17:11:53 crc kubenswrapper[4746]: I0129 17:11:53.308671 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jv7l8" event={"ID":"32b9d421-df3e-41ae-aed8-a90e40f19fff","Type":"ContainerDied","Data":"1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db"} Jan 29 17:11:53 crc kubenswrapper[4746]: I0129 17:11:53.446457 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:11:53 crc kubenswrapper[4746]: E0129 17:11:53.446921 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:11:54 crc kubenswrapper[4746]: I0129 17:11:54.323959 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jv7l8" event={"ID":"32b9d421-df3e-41ae-aed8-a90e40f19fff","Type":"ContainerStarted","Data":"cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805"} Jan 29 17:11:54 crc kubenswrapper[4746]: I0129 17:11:54.349372 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jv7l8" podStartSLOduration=2.960098398 podStartE2EDuration="4.349345515s" podCreationTimestamp="2026-01-29 17:11:50 +0000 UTC" firstStartedPulling="2026-01-29 17:11:52.299532325 +0000 UTC m=+2234.700116969" lastFinishedPulling="2026-01-29 
17:11:53.688779442 +0000 UTC m=+2236.089364086" observedRunningTime="2026-01-29 17:11:54.343789906 +0000 UTC m=+2236.744374560" watchObservedRunningTime="2026-01-29 17:11:54.349345515 +0000 UTC m=+2236.749930169" Jan 29 17:12:00 crc kubenswrapper[4746]: I0129 17:12:00.858791 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:12:00 crc kubenswrapper[4746]: I0129 17:12:00.859343 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:12:00 crc kubenswrapper[4746]: I0129 17:12:00.899490 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:12:01 crc kubenswrapper[4746]: I0129 17:12:01.426508 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:12:01 crc kubenswrapper[4746]: I0129 17:12:01.492445 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jv7l8"] Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.384635 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jv7l8" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="registry-server" containerID="cri-o://cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805" gracePeriod=2 Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.775026 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.827404 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-utilities\") pod \"32b9d421-df3e-41ae-aed8-a90e40f19fff\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.827474 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks4qz\" (UniqueName: \"kubernetes.io/projected/32b9d421-df3e-41ae-aed8-a90e40f19fff-kube-api-access-ks4qz\") pod \"32b9d421-df3e-41ae-aed8-a90e40f19fff\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.827527 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-catalog-content\") pod \"32b9d421-df3e-41ae-aed8-a90e40f19fff\" (UID: \"32b9d421-df3e-41ae-aed8-a90e40f19fff\") " Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.828449 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-utilities" (OuterVolumeSpecName: "utilities") pod "32b9d421-df3e-41ae-aed8-a90e40f19fff" (UID: "32b9d421-df3e-41ae-aed8-a90e40f19fff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.835901 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b9d421-df3e-41ae-aed8-a90e40f19fff-kube-api-access-ks4qz" (OuterVolumeSpecName: "kube-api-access-ks4qz") pod "32b9d421-df3e-41ae-aed8-a90e40f19fff" (UID: "32b9d421-df3e-41ae-aed8-a90e40f19fff"). InnerVolumeSpecName "kube-api-access-ks4qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.851979 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32b9d421-df3e-41ae-aed8-a90e40f19fff" (UID: "32b9d421-df3e-41ae-aed8-a90e40f19fff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.928780 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.928828 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b9d421-df3e-41ae-aed8-a90e40f19fff-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:12:03 crc kubenswrapper[4746]: I0129 17:12:03.928841 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks4qz\" (UniqueName: \"kubernetes.io/projected/32b9d421-df3e-41ae-aed8-a90e40f19fff-kube-api-access-ks4qz\") on node \"crc\" DevicePath \"\"" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.394601 4746 generic.go:334] "Generic (PLEG): container finished" podID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerID="cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805" exitCode=0 Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.394647 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jv7l8" event={"ID":"32b9d421-df3e-41ae-aed8-a90e40f19fff","Type":"ContainerDied","Data":"cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805"} Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.394663 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jv7l8" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.394681 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jv7l8" event={"ID":"32b9d421-df3e-41ae-aed8-a90e40f19fff","Type":"ContainerDied","Data":"b4f9c499613e1f7be206a263a55ea1f6a600c62cc22b3f63168e04f87029ea11"} Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.394702 4746 scope.go:117] "RemoveContainer" containerID="cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.412876 4746 scope.go:117] "RemoveContainer" containerID="1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.427151 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jv7l8"] Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.432230 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jv7l8"] Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.440199 4746 scope.go:117] "RemoveContainer" containerID="b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.457037 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" path="/var/lib/kubelet/pods/32b9d421-df3e-41ae-aed8-a90e40f19fff/volumes" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.458263 4746 scope.go:117] "RemoveContainer" containerID="cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805" Jan 29 17:12:04 crc kubenswrapper[4746]: E0129 17:12:04.458744 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805\": container with ID starting with cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805 not found: ID does not exist" containerID="cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.458784 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805"} err="failed to get container status \"cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805\": rpc error: code = NotFound desc = could not find container \"cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805\": container with ID starting with cf8f381ec697d486d8c7ac9ed6549337330a9f8b52af558240717bbb40527805 not found: ID does not exist" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.458815 4746 scope.go:117] "RemoveContainer" containerID="1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db" Jan 29 17:12:04 crc kubenswrapper[4746]: E0129 17:12:04.459213 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db\": container with ID starting with 1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db not found: ID does not exist" containerID="1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.459230 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db"} err="failed to get container status \"1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db\": rpc error: code = NotFound desc = could not find container \"1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db\": container with ID starting with 1246aeb2edb8cd396e848b99763c3a8c26c8d19633b04b0cd0835ac57ecc82db not found: ID does not exist" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.459242 4746 scope.go:117] "RemoveContainer" containerID="b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557" Jan 29 17:12:04 crc kubenswrapper[4746]: E0129 17:12:04.459465 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557\": container with ID starting with b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557 not found: ID does not exist" containerID="b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557" Jan 29 17:12:04 crc kubenswrapper[4746]: I0129 17:12:04.459483 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557"} err="failed to get container status \"b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557\": rpc error: code = NotFound desc = could not find container \"b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557\": container with ID starting with b0513923fa10ad6b5ee6799c7da2d6e28f5a6aa1c3e032a7694deeb0465cb557 not found: ID does not exist" Jan 29 17:12:07 crc kubenswrapper[4746]: I0129 17:12:07.445775 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:12:07 crc kubenswrapper[4746]: E0129 17:12:07.446295 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:12:21 crc kubenswrapper[4746]: I0129 17:12:21.445959 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:12:21 crc kubenswrapper[4746]: E0129 17:12:21.446723 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:12:36 crc kubenswrapper[4746]: I0129 17:12:36.445451 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:12:36 crc kubenswrapper[4746]: E0129 17:12:36.446402 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:12:48 crc kubenswrapper[4746]: I0129 17:12:48.450463 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:12:48 crc kubenswrapper[4746]: E0129 17:12:48.451435 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.447121 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:13:00 crc kubenswrapper[4746]: E0129 17:13:00.449482 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.622576 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vjs6z"] Jan 29 17:13:00 crc kubenswrapper[4746]: E0129 17:13:00.622919 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="extract-utilities" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.622936 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="extract-utilities" Jan 29 17:13:00 crc kubenswrapper[4746]: E0129 17:13:00.622962 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="extract-content" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.622971 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="extract-content" Jan 29 17:13:00 crc kubenswrapper[4746]: E0129 17:13:00.623000 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="registry-server" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.623009 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="registry-server" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.623204 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b9d421-df3e-41ae-aed8-a90e40f19fff" containerName="registry-server" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.624443 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.639562 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjs6z"] Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.797290 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-catalog-content\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.797372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-utilities\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.797433 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tx8\" (UniqueName: \"kubernetes.io/projected/f8833295-3be8-4487-9819-d27f3f0fa718-kube-api-access-k8tx8\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.899071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-utilities\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.899125 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tx8\" (UniqueName: \"kubernetes.io/projected/f8833295-3be8-4487-9819-d27f3f0fa718-kube-api-access-k8tx8\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.899233 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-catalog-content\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.899807 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-utilities\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.899841 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-catalog-content\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.922123 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k8tx8\" (UniqueName: \"kubernetes.io/projected/f8833295-3be8-4487-9819-d27f3f0fa718-kube-api-access-k8tx8\") pod \"community-operators-vjs6z\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:00 crc kubenswrapper[4746]: I0129 17:13:00.945790 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:01 crc kubenswrapper[4746]: I0129 17:13:01.217383 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjs6z"] Jan 29 17:13:01 crc kubenswrapper[4746]: I0129 17:13:01.785878 4746 generic.go:334] "Generic (PLEG): container finished" podID="f8833295-3be8-4487-9819-d27f3f0fa718" containerID="e42674681ffaeca3016d5c21e48892c8c8db031f19340b03d08236b1fcced84a" exitCode=0 Jan 29 17:13:01 crc kubenswrapper[4746]: I0129 17:13:01.785932 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjs6z" event={"ID":"f8833295-3be8-4487-9819-d27f3f0fa718","Type":"ContainerDied","Data":"e42674681ffaeca3016d5c21e48892c8c8db031f19340b03d08236b1fcced84a"} Jan 29 17:13:01 crc kubenswrapper[4746]: I0129 17:13:01.785978 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjs6z" event={"ID":"f8833295-3be8-4487-9819-d27f3f0fa718","Type":"ContainerStarted","Data":"877356aee9b1af189cb71d9c60abe939c5217b0aed1cc51c955f184622f79ef6"} Jan 29 17:13:01 crc kubenswrapper[4746]: I0129 17:13:01.787522 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:13:02 crc kubenswrapper[4746]: I0129 17:13:02.794854 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjs6z" event={"ID":"f8833295-3be8-4487-9819-d27f3f0fa718","Type":"ContainerStarted","Data":"fb69b5b295f1245fc7833f5678715c7c1e93e8ab764bbd2734f9ff3d4e84be2a"} Jan 29 17:13:03 crc kubenswrapper[4746]: I0129 17:13:03.802489 4746 generic.go:334] "Generic (PLEG): container finished" podID="f8833295-3be8-4487-9819-d27f3f0fa718" containerID="fb69b5b295f1245fc7833f5678715c7c1e93e8ab764bbd2734f9ff3d4e84be2a" exitCode=0 Jan 29 17:13:03 crc kubenswrapper[4746]: I0129 17:13:03.802535 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjs6z" event={"ID":"f8833295-3be8-4487-9819-d27f3f0fa718","Type":"ContainerDied","Data":"fb69b5b295f1245fc7833f5678715c7c1e93e8ab764bbd2734f9ff3d4e84be2a"} Jan 29 17:13:05 crc kubenswrapper[4746]: I0129 17:13:05.819899 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjs6z" event={"ID":"f8833295-3be8-4487-9819-d27f3f0fa718","Type":"ContainerStarted","Data":"63ebed3c16e77edafbf80ee1816c5dbb8799fd2166fdfd5550ef02a83b02cf2b"} Jan 29 17:13:05 crc kubenswrapper[4746]: I0129 17:13:05.845891 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vjs6z" podStartSLOduration=2.771992415 podStartE2EDuration="5.845849875s" podCreationTimestamp="2026-01-29 17:13:00 +0000 UTC" firstStartedPulling="2026-01-29 17:13:01.787180213 +0000 UTC m=+2304.187764857" lastFinishedPulling="2026-01-29 17:13:04.861037673 +0000 UTC m=+2307.261622317" observedRunningTime="2026-01-29 17:13:05.844650233 +0000 UTC m=+2308.245234877" watchObservedRunningTime="2026-01-29 
17:13:05.845849875 +0000 UTC m=+2308.246434519" Jan 29 17:13:10 crc kubenswrapper[4746]: I0129 17:13:10.946229 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:10 crc kubenswrapper[4746]: I0129 17:13:10.946618 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:10 crc kubenswrapper[4746]: I0129 17:13:10.984433 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:11 crc kubenswrapper[4746]: I0129 17:13:11.897234 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:11 crc kubenswrapper[4746]: I0129 17:13:11.950535 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjs6z"] Jan 29 17:13:13 crc kubenswrapper[4746]: I0129 17:13:13.445564 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:13:13 crc kubenswrapper[4746]: E0129 17:13:13.445790 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:13:13 crc kubenswrapper[4746]: I0129 17:13:13.866623 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vjs6z" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="registry-server" containerID="cri-o://63ebed3c16e77edafbf80ee1816c5dbb8799fd2166fdfd5550ef02a83b02cf2b" gracePeriod=2 Jan 29 17:13:14 crc kubenswrapper[4746]: I0129 17:13:14.874106 4746 generic.go:334] "Generic (PLEG): container finished" podID="f8833295-3be8-4487-9819-d27f3f0fa718" containerID="63ebed3c16e77edafbf80ee1816c5dbb8799fd2166fdfd5550ef02a83b02cf2b" exitCode=0 Jan 29 17:13:14 crc kubenswrapper[4746]: I0129 17:13:14.874216 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjs6z" event={"ID":"f8833295-3be8-4487-9819-d27f3f0fa718","Type":"ContainerDied","Data":"63ebed3c16e77edafbf80ee1816c5dbb8799fd2166fdfd5550ef02a83b02cf2b"} Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.341245 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.500096 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-utilities\") pod \"f8833295-3be8-4487-9819-d27f3f0fa718\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.500253 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-catalog-content\") pod \"f8833295-3be8-4487-9819-d27f3f0fa718\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.500308 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8tx8\" (UniqueName: \"kubernetes.io/projected/f8833295-3be8-4487-9819-d27f3f0fa718-kube-api-access-k8tx8\") pod \"f8833295-3be8-4487-9819-d27f3f0fa718\" (UID: \"f8833295-3be8-4487-9819-d27f3f0fa718\") " Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.501034 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-utilities" (OuterVolumeSpecName: "utilities") pod "f8833295-3be8-4487-9819-d27f3f0fa718" (UID: "f8833295-3be8-4487-9819-d27f3f0fa718"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.506453 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8833295-3be8-4487-9819-d27f3f0fa718-kube-api-access-k8tx8" (OuterVolumeSpecName: "kube-api-access-k8tx8") pod "f8833295-3be8-4487-9819-d27f3f0fa718" (UID: "f8833295-3be8-4487-9819-d27f3f0fa718"). InnerVolumeSpecName "kube-api-access-k8tx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.555958 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8833295-3be8-4487-9819-d27f3f0fa718" (UID: "f8833295-3be8-4487-9819-d27f3f0fa718"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.602399 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.602581 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8833295-3be8-4487-9819-d27f3f0fa718-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.602603 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8tx8\" (UniqueName: \"kubernetes.io/projected/f8833295-3be8-4487-9819-d27f3f0fa718-kube-api-access-k8tx8\") on node \"crc\" DevicePath \"\"" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.886236 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjs6z" event={"ID":"f8833295-3be8-4487-9819-d27f3f0fa718","Type":"ContainerDied","Data":"877356aee9b1af189cb71d9c60abe939c5217b0aed1cc51c955f184622f79ef6"} Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.886302 4746 scope.go:117] "RemoveContainer" containerID="63ebed3c16e77edafbf80ee1816c5dbb8799fd2166fdfd5550ef02a83b02cf2b" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.886355 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjs6z" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.907867 4746 scope.go:117] "RemoveContainer" containerID="fb69b5b295f1245fc7833f5678715c7c1e93e8ab764bbd2734f9ff3d4e84be2a" Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.922754 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjs6z"] Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.932789 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vjs6z"] Jan 29 17:13:15 crc kubenswrapper[4746]: I0129 17:13:15.940894 4746 scope.go:117] "RemoveContainer" containerID="e42674681ffaeca3016d5c21e48892c8c8db031f19340b03d08236b1fcced84a" Jan 29 17:13:16 crc kubenswrapper[4746]: I0129 17:13:16.453173 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" path="/var/lib/kubelet/pods/f8833295-3be8-4487-9819-d27f3f0fa718/volumes" Jan 29 17:13:26 crc kubenswrapper[4746]: I0129 17:13:26.446130 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:13:26 crc kubenswrapper[4746]: E0129 17:13:26.447136 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:13:37 crc kubenswrapper[4746]: I0129 17:13:37.445329 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:13:37 crc kubenswrapper[4746]: E0129 17:13:37.446151 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:13:48 crc kubenswrapper[4746]: I0129 17:13:48.449521 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:13:48 crc kubenswrapper[4746]: E0129 17:13:48.450234 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:13:59 crc kubenswrapper[4746]: I0129 17:13:59.446241 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:13:59 crc kubenswrapper[4746]: E0129 17:13:59.447332 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:14:12 crc kubenswrapper[4746]: I0129 17:14:12.446052 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:14:12 crc kubenswrapper[4746]: E0129 17:14:12.446772 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:14:23 crc kubenswrapper[4746]: I0129 17:14:23.445536 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:14:23 crc kubenswrapper[4746]: E0129 17:14:23.446232 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:14:36 crc kubenswrapper[4746]: I0129 17:14:36.446064 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:14:36 crc kubenswrapper[4746]: E0129 17:14:36.446775 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:14:48 crc kubenswrapper[4746]: I0129 17:14:48.450237 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:14:48 crc kubenswrapper[4746]: E0129 17:14:48.450995 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.165126 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852"] Jan 29 17:15:00 crc kubenswrapper[4746]: E0129 17:15:00.165921 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="extract-content" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.165933 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="extract-content" Jan 29 17:15:00 crc kubenswrapper[4746]: E0129 17:15:00.165942 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="registry-server" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.165948 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="registry-server" Jan 29 17:15:00 crc kubenswrapper[4746]: E0129 17:15:00.165963 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="extract-utilities" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.165969 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="extract-utilities" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.166089 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8833295-3be8-4487-9819-d27f3f0fa718" containerName="registry-server" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.166536 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.169649 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.170505 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.187445 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852"] Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.271422 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5ada394-4bb4-48d3-9480-825a99d0059d-secret-volume\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.271475 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dncjf\" (UniqueName: \"kubernetes.io/projected/d5ada394-4bb4-48d3-9480-825a99d0059d-kube-api-access-dncjf\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.271571 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5ada394-4bb4-48d3-9480-825a99d0059d-config-volume\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.373377 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5ada394-4bb4-48d3-9480-825a99d0059d-secret-volume\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.373432 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dncjf\" (UniqueName: \"kubernetes.io/projected/d5ada394-4bb4-48d3-9480-825a99d0059d-kube-api-access-dncjf\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.373499 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5ada394-4bb4-48d3-9480-825a99d0059d-config-volume\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.374450 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5ada394-4bb4-48d3-9480-825a99d0059d-config-volume\") pod 
\"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.380165 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5ada394-4bb4-48d3-9480-825a99d0059d-secret-volume\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.390643 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dncjf\" (UniqueName: \"kubernetes.io/projected/d5ada394-4bb4-48d3-9480-825a99d0059d-kube-api-access-dncjf\") pod \"collect-profiles-29495115-c4852\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.488291 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:00 crc kubenswrapper[4746]: I0129 17:15:00.909833 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852"] Jan 29 17:15:01 crc kubenswrapper[4746]: I0129 17:15:01.446309 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:15:01 crc kubenswrapper[4746]: E0129 17:15:01.447015 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:15:01 crc kubenswrapper[4746]: I0129 17:15:01.628806 4746 generic.go:334] "Generic (PLEG): container finished" podID="d5ada394-4bb4-48d3-9480-825a99d0059d" containerID="9e4eddf82a3df25f2b8ae75eae10b86f3bc73ae64e76a923ec5eb8d5627d440a" exitCode=0 Jan 29 17:15:01 crc kubenswrapper[4746]: I0129 17:15:01.628856 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" event={"ID":"d5ada394-4bb4-48d3-9480-825a99d0059d","Type":"ContainerDied","Data":"9e4eddf82a3df25f2b8ae75eae10b86f3bc73ae64e76a923ec5eb8d5627d440a"} Jan 29 17:15:01 crc kubenswrapper[4746]: I0129 17:15:01.628886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" event={"ID":"d5ada394-4bb4-48d3-9480-825a99d0059d","Type":"ContainerStarted","Data":"acd88e151ce9b8d383d1dbfb23f4bdc876b88fc3142bc1f88027cd2406dbd8de"} Jan 29 17:15:02 crc kubenswrapper[4746]: I0129 17:15:02.882022 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.010848 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dncjf\" (UniqueName: \"kubernetes.io/projected/d5ada394-4bb4-48d3-9480-825a99d0059d-kube-api-access-dncjf\") pod \"d5ada394-4bb4-48d3-9480-825a99d0059d\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.011109 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5ada394-4bb4-48d3-9480-825a99d0059d-secret-volume\") pod \"d5ada394-4bb4-48d3-9480-825a99d0059d\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.011268 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5ada394-4bb4-48d3-9480-825a99d0059d-config-volume\") pod \"d5ada394-4bb4-48d3-9480-825a99d0059d\" (UID: \"d5ada394-4bb4-48d3-9480-825a99d0059d\") " Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.011992 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ada394-4bb4-48d3-9480-825a99d0059d-config-volume" (OuterVolumeSpecName: "config-volume") pod "d5ada394-4bb4-48d3-9480-825a99d0059d" (UID: "d5ada394-4bb4-48d3-9480-825a99d0059d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.015722 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ada394-4bb4-48d3-9480-825a99d0059d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d5ada394-4bb4-48d3-9480-825a99d0059d" (UID: "d5ada394-4bb4-48d3-9480-825a99d0059d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.015963 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ada394-4bb4-48d3-9480-825a99d0059d-kube-api-access-dncjf" (OuterVolumeSpecName: "kube-api-access-dncjf") pod "d5ada394-4bb4-48d3-9480-825a99d0059d" (UID: "d5ada394-4bb4-48d3-9480-825a99d0059d"). InnerVolumeSpecName "kube-api-access-dncjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.112527 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5ada394-4bb4-48d3-9480-825a99d0059d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.112566 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dncjf\" (UniqueName: \"kubernetes.io/projected/d5ada394-4bb4-48d3-9480-825a99d0059d-kube-api-access-dncjf\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.112578 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d5ada394-4bb4-48d3-9480-825a99d0059d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.642979 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" event={"ID":"d5ada394-4bb4-48d3-9480-825a99d0059d","Type":"ContainerDied","Data":"acd88e151ce9b8d383d1dbfb23f4bdc876b88fc3142bc1f88027cd2406dbd8de"} Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.643023 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd88e151ce9b8d383d1dbfb23f4bdc876b88fc3142bc1f88027cd2406dbd8de" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.643058 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-c4852" Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.955235 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n"] Jan 29 17:15:03 crc kubenswrapper[4746]: I0129 17:15:03.961850 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-4t48n"] Jan 29 17:15:04 crc kubenswrapper[4746]: I0129 17:15:04.459370 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7becf4a7-7ad1-4d20-9707-a28330253dfd" path="/var/lib/kubelet/pods/7becf4a7-7ad1-4d20-9707-a28330253dfd/volumes" Jan 29 17:15:15 crc kubenswrapper[4746]: I0129 17:15:15.445672 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:15:15 crc kubenswrapper[4746]: E0129 17:15:15.446743 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:15:27 crc kubenswrapper[4746]: I0129 17:15:27.446424 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:15:27 crc kubenswrapper[4746]: E0129 17:15:27.447581 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:15:40 crc kubenswrapper[4746]: I0129 17:15:40.446103 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:15:40 crc kubenswrapper[4746]: E0129 17:15:40.446798 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:15:42 crc kubenswrapper[4746]: I0129 17:15:42.497112 4746 scope.go:117] "RemoveContainer" containerID="56945b9f9905328c80010a14bdf4394e3457f2b49f68f700d7bdb410ad10b2e1" Jan 29 17:15:51 crc kubenswrapper[4746]: I0129 17:15:51.446064 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:15:51 crc kubenswrapper[4746]: E0129 17:15:51.447008 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:16:06 crc kubenswrapper[4746]: I0129 17:16:06.446369 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:16:06 crc kubenswrapper[4746]: E0129 17:16:06.447120 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:16:19 crc kubenswrapper[4746]: I0129 17:16:19.445158 4746 scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:16:20 crc kubenswrapper[4746]: I0129 17:16:20.167499 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"c375384f7c9db8d3b769022dc7ec6b2705dc882fc3948563f2a505700413de4f"} Jan 29 17:18:19 crc kubenswrapper[4746]: I0129 17:18:19.065251 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:18:19 crc kubenswrapper[4746]: I0129 17:18:19.065797 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 29 17:18:49 crc kubenswrapper[4746]: I0129 17:18:49.065104 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:18:49 crc kubenswrapper[4746]: I0129 17:18:49.065812 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.064765 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.065256 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.065309 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.066002 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c375384f7c9db8d3b769022dc7ec6b2705dc882fc3948563f2a505700413de4f"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.066104 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://c375384f7c9db8d3b769022dc7ec6b2705dc882fc3948563f2a505700413de4f" gracePeriod=600 Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.443857 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="c375384f7c9db8d3b769022dc7ec6b2705dc882fc3948563f2a505700413de4f" exitCode=0 Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.443950 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"c375384f7c9db8d3b769022dc7ec6b2705dc882fc3948563f2a505700413de4f"} Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.444471 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594"} Jan 29 17:19:19 crc kubenswrapper[4746]: I0129 17:19:19.444615 4746 
scope.go:117] "RemoveContainer" containerID="2781bc6d4a1e9384f775b72c811949645f88d0da74fb8a41c6d341a3280fb4f7" Jan 29 17:21:18 crc kubenswrapper[4746]: I0129 17:21:18.881959 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zngsx"] Jan 29 17:21:18 crc kubenswrapper[4746]: E0129 17:21:18.883224 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ada394-4bb4-48d3-9480-825a99d0059d" containerName="collect-profiles" Jan 29 17:21:18 crc kubenswrapper[4746]: I0129 17:21:18.883241 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ada394-4bb4-48d3-9480-825a99d0059d" containerName="collect-profiles" Jan 29 17:21:18 crc kubenswrapper[4746]: I0129 17:21:18.883409 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ada394-4bb4-48d3-9480-825a99d0059d" containerName="collect-profiles" Jan 29 17:21:18 crc kubenswrapper[4746]: I0129 17:21:18.884659 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:18 crc kubenswrapper[4746]: I0129 17:21:18.898559 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zngsx"] Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.057940 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-catalog-content\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.058031 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-utilities\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.058073 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rqk\" (UniqueName: \"kubernetes.io/projected/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-kube-api-access-m4rqk\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.064712 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.064787 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.158933 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-utilities\") pod \"certified-operators-zngsx\" (UID: 
\"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.159010 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4rqk\" (UniqueName: \"kubernetes.io/projected/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-kube-api-access-m4rqk\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.159067 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-catalog-content\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.159504 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-catalog-content\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.159721 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-utilities\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.182463 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4rqk\" (UniqueName: \"kubernetes.io/projected/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-kube-api-access-m4rqk\") pod \"certified-operators-zngsx\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.206903 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:19 crc kubenswrapper[4746]: I0129 17:21:19.662975 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zngsx"] Jan 29 17:21:20 crc kubenswrapper[4746]: I0129 17:21:20.294352 4746 generic.go:334] "Generic (PLEG): container finished" podID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerID="8554c9322eeff7f2e6a1b234646b895167a08d6432959c669c6781977a6aeca1" exitCode=0 Jan 29 17:21:20 crc kubenswrapper[4746]: I0129 17:21:20.294446 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngsx" event={"ID":"e7d4e430-42df-4cca-b7a8-2df49d8e3c56","Type":"ContainerDied","Data":"8554c9322eeff7f2e6a1b234646b895167a08d6432959c669c6781977a6aeca1"} Jan 29 17:21:20 crc kubenswrapper[4746]: I0129 17:21:20.295807 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngsx" event={"ID":"e7d4e430-42df-4cca-b7a8-2df49d8e3c56","Type":"ContainerStarted","Data":"30ee4f89b171212174cc0fbcc4a56d5755d808461e613b39cb923a23a351b919"} Jan 29 17:21:20 crc kubenswrapper[4746]: I0129 17:21:20.297021 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.304439 4746 generic.go:334] "Generic (PLEG): container finished" podID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerID="0f02dff4ebf65084c1abd370ca8c06f65ccbb899d5cb3e95e46428c130007c86" exitCode=0 Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.304621 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngsx" event={"ID":"e7d4e430-42df-4cca-b7a8-2df49d8e3c56","Type":"ContainerDied","Data":"0f02dff4ebf65084c1abd370ca8c06f65ccbb899d5cb3e95e46428c130007c86"} Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.875118 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmztx"] Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.876370 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.890506 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmztx"] Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.995425 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-utilities\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.995656 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-catalog-content\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:21 crc kubenswrapper[4746]: I0129 17:21:21.995716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7cr\" (UniqueName: \"kubernetes.io/projected/fb1d1c36-9919-46a0-a989-dbc7149676b3-kube-api-access-kd7cr\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.096715 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-catalog-content\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.096771 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7cr\" (UniqueName: \"kubernetes.io/projected/fb1d1c36-9919-46a0-a989-dbc7149676b3-kube-api-access-kd7cr\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.096838 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-utilities\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.097501 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-catalog-content\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.097611 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-utilities\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.129724 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kd7cr\" (UniqueName: \"kubernetes.io/projected/fb1d1c36-9919-46a0-a989-dbc7149676b3-kube-api-access-kd7cr\") pod \"redhat-operators-vmztx\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.192037 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.322838 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngsx" event={"ID":"e7d4e430-42df-4cca-b7a8-2df49d8e3c56","Type":"ContainerStarted","Data":"b2c3b0749fe753a5c089bc735581dadf6a32822c1f950ca803f6650ecb779aa4"} Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.354398 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zngsx" podStartSLOduration=2.895545619 podStartE2EDuration="4.354376553s" podCreationTimestamp="2026-01-29 17:21:18 +0000 UTC" firstStartedPulling="2026-01-29 17:21:20.29673627 +0000 UTC m=+2802.697320914" lastFinishedPulling="2026-01-29 17:21:21.755567164 +0000 UTC m=+2804.156151848" observedRunningTime="2026-01-29 17:21:22.344433807 +0000 UTC m=+2804.745018451" watchObservedRunningTime="2026-01-29 17:21:22.354376553 +0000 UTC m=+2804.754961197" Jan 29 17:21:22 crc kubenswrapper[4746]: I0129 17:21:22.469077 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmztx"] Jan 29 17:21:22 crc kubenswrapper[4746]: W0129 17:21:22.473953 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb1d1c36_9919_46a0_a989_dbc7149676b3.slice/crio-686eb0996a4b5dd23749d0c6305e946d3c03764815ebee990ce210c9a0e716a0 WatchSource:0}: Error finding container 686eb0996a4b5dd23749d0c6305e946d3c03764815ebee990ce210c9a0e716a0: Status 404 returned error can't find the container with id 686eb0996a4b5dd23749d0c6305e946d3c03764815ebee990ce210c9a0e716a0 Jan 29 17:21:23 crc kubenswrapper[4746]: I0129 17:21:23.330088 4746 generic.go:334] "Generic (PLEG): container finished" podID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerID="4043339a4d0a6e510085759290cc874a262a40c4660812f4b32b1ca8c5bc8e7d" exitCode=0 Jan 29 17:21:23 crc kubenswrapper[4746]: I0129 17:21:23.330170 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmztx" event={"ID":"fb1d1c36-9919-46a0-a989-dbc7149676b3","Type":"ContainerDied","Data":"4043339a4d0a6e510085759290cc874a262a40c4660812f4b32b1ca8c5bc8e7d"} Jan 29 17:21:23 crc kubenswrapper[4746]: I0129 17:21:23.330264 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmztx" event={"ID":"fb1d1c36-9919-46a0-a989-dbc7149676b3","Type":"ContainerStarted","Data":"686eb0996a4b5dd23749d0c6305e946d3c03764815ebee990ce210c9a0e716a0"} Jan 29 17:21:24 crc kubenswrapper[4746]: I0129 17:21:24.338735 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmztx" event={"ID":"fb1d1c36-9919-46a0-a989-dbc7149676b3","Type":"ContainerStarted","Data":"47851a234508856e5e539327c9d6aa8373c3c015fa673925a36a14656dc1b45c"} Jan 29 17:21:25 crc kubenswrapper[4746]: I0129 17:21:25.350983 4746 generic.go:334] "Generic (PLEG): container finished" podID="fb1d1c36-9919-46a0-a989-dbc7149676b3" 
containerID="47851a234508856e5e539327c9d6aa8373c3c015fa673925a36a14656dc1b45c" exitCode=0 Jan 29 17:21:25 crc kubenswrapper[4746]: I0129 17:21:25.351054 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmztx" event={"ID":"fb1d1c36-9919-46a0-a989-dbc7149676b3","Type":"ContainerDied","Data":"47851a234508856e5e539327c9d6aa8373c3c015fa673925a36a14656dc1b45c"} Jan 29 17:21:26 crc kubenswrapper[4746]: I0129 17:21:26.361198 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmztx" event={"ID":"fb1d1c36-9919-46a0-a989-dbc7149676b3","Type":"ContainerStarted","Data":"a1a069ec030af415ee6a9dfe568f77b232854e4f39c21185f248566edd29629b"} Jan 29 17:21:26 crc kubenswrapper[4746]: I0129 17:21:26.379059 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmztx" podStartSLOduration=2.918112591 podStartE2EDuration="5.379039221s" podCreationTimestamp="2026-01-29 17:21:21 +0000 UTC" firstStartedPulling="2026-01-29 17:21:23.33192136 +0000 UTC m=+2805.732506004" lastFinishedPulling="2026-01-29 17:21:25.79284799 +0000 UTC m=+2808.193432634" observedRunningTime="2026-01-29 17:21:26.376553484 +0000 UTC m=+2808.777138148" watchObservedRunningTime="2026-01-29 17:21:26.379039221 +0000 UTC m=+2808.779623865" Jan 29 17:21:29 crc kubenswrapper[4746]: I0129 17:21:29.207179 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:29 crc kubenswrapper[4746]: I0129 17:21:29.208357 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:29 crc kubenswrapper[4746]: I0129 17:21:29.256591 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:29 crc kubenswrapper[4746]: I0129 17:21:29.423000 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:32 crc kubenswrapper[4746]: I0129 17:21:32.192759 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:32 crc kubenswrapper[4746]: I0129 17:21:32.192809 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:32 crc kubenswrapper[4746]: I0129 17:21:32.236101 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:32 crc kubenswrapper[4746]: I0129 17:21:32.439565 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.267258 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zngsx"] Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.267495 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zngsx" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="registry-server" containerID="cri-o://b2c3b0749fe753a5c089bc735581dadf6a32822c1f950ca803f6650ecb779aa4" gracePeriod=2 Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.413900 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerID="b2c3b0749fe753a5c089bc735581dadf6a32822c1f950ca803f6650ecb779aa4" exitCode=0 Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.413993 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngsx" event={"ID":"e7d4e430-42df-4cca-b7a8-2df49d8e3c56","Type":"ContainerDied","Data":"b2c3b0749fe753a5c089bc735581dadf6a32822c1f950ca803f6650ecb779aa4"} Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.681000 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.859350 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-catalog-content\") pod \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.859677 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-utilities\") pod \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.859794 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4rqk\" (UniqueName: \"kubernetes.io/projected/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-kube-api-access-m4rqk\") pod \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\" (UID: \"e7d4e430-42df-4cca-b7a8-2df49d8e3c56\") " Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.860395 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-utilities" (OuterVolumeSpecName: "utilities") pod "e7d4e430-42df-4cca-b7a8-2df49d8e3c56" (UID: "e7d4e430-42df-4cca-b7a8-2df49d8e3c56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.866022 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-kube-api-access-m4rqk" (OuterVolumeSpecName: "kube-api-access-m4rqk") pod "e7d4e430-42df-4cca-b7a8-2df49d8e3c56" (UID: "e7d4e430-42df-4cca-b7a8-2df49d8e3c56"). InnerVolumeSpecName "kube-api-access-m4rqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.909267 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7d4e430-42df-4cca-b7a8-2df49d8e3c56" (UID: "e7d4e430-42df-4cca-b7a8-2df49d8e3c56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.961525 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.961563 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:33 crc kubenswrapper[4746]: I0129 17:21:33.961575 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4rqk\" (UniqueName: \"kubernetes.io/projected/e7d4e430-42df-4cca-b7a8-2df49d8e3c56-kube-api-access-m4rqk\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:34 crc kubenswrapper[4746]: I0129 17:21:34.423867 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngsx" event={"ID":"e7d4e430-42df-4cca-b7a8-2df49d8e3c56","Type":"ContainerDied","Data":"30ee4f89b171212174cc0fbcc4a56d5755d808461e613b39cb923a23a351b919"} Jan 29 17:21:34 crc kubenswrapper[4746]: I0129 17:21:34.423924 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zngsx" Jan 29 17:21:34 crc kubenswrapper[4746]: I0129 17:21:34.423938 4746 scope.go:117] "RemoveContainer" containerID="b2c3b0749fe753a5c089bc735581dadf6a32822c1f950ca803f6650ecb779aa4" Jan 29 17:21:34 crc kubenswrapper[4746]: I0129 17:21:34.443851 4746 scope.go:117] "RemoveContainer" containerID="0f02dff4ebf65084c1abd370ca8c06f65ccbb899d5cb3e95e46428c130007c86" Jan 29 17:21:34 crc kubenswrapper[4746]: I0129 17:21:34.457576 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zngsx"] Jan 29 17:21:34 crc kubenswrapper[4746]: I0129 17:21:34.463151 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zngsx"] Jan 29 17:21:34 crc kubenswrapper[4746]: I0129 17:21:34.480520 4746 scope.go:117] "RemoveContainer" containerID="8554c9322eeff7f2e6a1b234646b895167a08d6432959c669c6781977a6aeca1" Jan 29 17:21:36 crc kubenswrapper[4746]: I0129 17:21:36.453657 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" path="/var/lib/kubelet/pods/e7d4e430-42df-4cca-b7a8-2df49d8e3c56/volumes" Jan 29 17:21:36 crc kubenswrapper[4746]: I0129 17:21:36.865341 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmztx"] Jan 29 17:21:36 crc kubenswrapper[4746]: I0129 17:21:36.865615 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmztx" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="registry-server" containerID="cri-o://a1a069ec030af415ee6a9dfe568f77b232854e4f39c21185f248566edd29629b" gracePeriod=2 Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.448010 4746 generic.go:334] "Generic (PLEG): container finished" podID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerID="a1a069ec030af415ee6a9dfe568f77b232854e4f39c21185f248566edd29629b" exitCode=0 Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.448081 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmztx" 
event={"ID":"fb1d1c36-9919-46a0-a989-dbc7149676b3","Type":"ContainerDied","Data":"a1a069ec030af415ee6a9dfe568f77b232854e4f39c21185f248566edd29629b"} Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.813980 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.915107 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-catalog-content\") pod \"fb1d1c36-9919-46a0-a989-dbc7149676b3\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.915180 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-utilities\") pod \"fb1d1c36-9919-46a0-a989-dbc7149676b3\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.915223 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd7cr\" (UniqueName: \"kubernetes.io/projected/fb1d1c36-9919-46a0-a989-dbc7149676b3-kube-api-access-kd7cr\") pod \"fb1d1c36-9919-46a0-a989-dbc7149676b3\" (UID: \"fb1d1c36-9919-46a0-a989-dbc7149676b3\") " Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.916011 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-utilities" (OuterVolumeSpecName: "utilities") pod "fb1d1c36-9919-46a0-a989-dbc7149676b3" (UID: "fb1d1c36-9919-46a0-a989-dbc7149676b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:21:37 crc kubenswrapper[4746]: I0129 17:21:37.919820 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1d1c36-9919-46a0-a989-dbc7149676b3-kube-api-access-kd7cr" (OuterVolumeSpecName: "kube-api-access-kd7cr") pod "fb1d1c36-9919-46a0-a989-dbc7149676b3" (UID: "fb1d1c36-9919-46a0-a989-dbc7149676b3"). InnerVolumeSpecName "kube-api-access-kd7cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.016335 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.016371 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd7cr\" (UniqueName: \"kubernetes.io/projected/fb1d1c36-9919-46a0-a989-dbc7149676b3-kube-api-access-kd7cr\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.036832 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb1d1c36-9919-46a0-a989-dbc7149676b3" (UID: "fb1d1c36-9919-46a0-a989-dbc7149676b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.117932 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1d1c36-9919-46a0-a989-dbc7149676b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.456444 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmztx" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.456722 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmztx" event={"ID":"fb1d1c36-9919-46a0-a989-dbc7149676b3","Type":"ContainerDied","Data":"686eb0996a4b5dd23749d0c6305e946d3c03764815ebee990ce210c9a0e716a0"} Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.457082 4746 scope.go:117] "RemoveContainer" containerID="a1a069ec030af415ee6a9dfe568f77b232854e4f39c21185f248566edd29629b" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.480611 4746 scope.go:117] "RemoveContainer" containerID="47851a234508856e5e539327c9d6aa8373c3c015fa673925a36a14656dc1b45c" Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.491860 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmztx"] Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.499442 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmztx"] Jan 29 17:21:38 crc kubenswrapper[4746]: I0129 17:21:38.508477 4746 scope.go:117] "RemoveContainer" containerID="4043339a4d0a6e510085759290cc874a262a40c4660812f4b32b1ca8c5bc8e7d" Jan 29 17:21:40 crc kubenswrapper[4746]: I0129 17:21:40.458565 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" path="/var/lib/kubelet/pods/fb1d1c36-9919-46a0-a989-dbc7149676b3/volumes" Jan 29 17:21:49 crc kubenswrapper[4746]: I0129 17:21:49.065380 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:21:49 crc kubenswrapper[4746]: I0129 17:21:49.065971 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.065692 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.066244 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.066292 4746 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.066719 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.066766 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" gracePeriod=600 Jan 29 17:22:19 crc kubenswrapper[4746]: E0129 17:22:19.190746 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.759445 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" exitCode=0 Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.759488 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594"} Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.759526 4746 scope.go:117] "RemoveContainer" containerID="c375384f7c9db8d3b769022dc7ec6b2705dc882fc3948563f2a505700413de4f" Jan 29 17:22:19 crc kubenswrapper[4746]: I0129 17:22:19.760077 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:22:19 crc kubenswrapper[4746]: E0129 17:22:19.760360 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.511315 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p2255"] Jan 29 17:22:26 crc kubenswrapper[4746]: E0129 17:22:26.512061 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="extract-content" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512074 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="extract-content" Jan 29 17:22:26 crc kubenswrapper[4746]: E0129 17:22:26.512086 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="extract-utilities" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512093 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="extract-utilities" Jan 29 17:22:26 crc kubenswrapper[4746]: E0129 17:22:26.512103 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="registry-server" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512109 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="registry-server" Jan 29 17:22:26 crc kubenswrapper[4746]: E0129 17:22:26.512119 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="extract-utilities" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512125 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="extract-utilities" Jan 29 17:22:26 crc kubenswrapper[4746]: E0129 17:22:26.512136 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="registry-server" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512141 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="registry-server" Jan 29 17:22:26 crc kubenswrapper[4746]: E0129 17:22:26.512148 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="extract-content" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512154 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="extract-content" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512307 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d4e430-42df-4cca-b7a8-2df49d8e3c56" containerName="registry-server" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.512317 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1d1c36-9919-46a0-a989-dbc7149676b3" containerName="registry-server" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.513258 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.523511 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2255"] Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.669120 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-catalog-content\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.669215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9hw\" (UniqueName: \"kubernetes.io/projected/5b9029c4-3e1f-476d-8ede-6c56413b5f23-kube-api-access-fx9hw\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.669260 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-utilities\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.770599 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-catalog-content\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.770648 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx9hw\" (UniqueName: \"kubernetes.io/projected/5b9029c4-3e1f-476d-8ede-6c56413b5f23-kube-api-access-fx9hw\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.770671 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-utilities\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.771241 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-utilities\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.771268 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-catalog-content\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.789208 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fx9hw\" (UniqueName: \"kubernetes.io/projected/5b9029c4-3e1f-476d-8ede-6c56413b5f23-kube-api-access-fx9hw\") pod \"redhat-marketplace-p2255\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:26 crc kubenswrapper[4746]: I0129 17:22:26.845441 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:27 crc kubenswrapper[4746]: I0129 17:22:27.261251 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2255"] Jan 29 17:22:27 crc kubenswrapper[4746]: I0129 17:22:27.819145 4746 generic.go:334] "Generic (PLEG): container finished" podID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerID="d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a" exitCode=0 Jan 29 17:22:27 crc kubenswrapper[4746]: I0129 17:22:27.819257 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2255" event={"ID":"5b9029c4-3e1f-476d-8ede-6c56413b5f23","Type":"ContainerDied","Data":"d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a"} Jan 29 17:22:27 crc kubenswrapper[4746]: I0129 17:22:27.819724 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2255" event={"ID":"5b9029c4-3e1f-476d-8ede-6c56413b5f23","Type":"ContainerStarted","Data":"af24b6f42dcca05eaca0e7983c6103a4f5c9f951f084eb98983150019b0e0396"} Jan 29 17:22:29 crc kubenswrapper[4746]: I0129 17:22:29.834559 4746 generic.go:334] "Generic (PLEG): container finished" podID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerID="43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410" exitCode=0 Jan 29 17:22:29 crc kubenswrapper[4746]: I0129 17:22:29.834794 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2255" event={"ID":"5b9029c4-3e1f-476d-8ede-6c56413b5f23","Type":"ContainerDied","Data":"43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410"} Jan 29 17:22:30 crc kubenswrapper[4746]: I0129 17:22:30.850249 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2255" event={"ID":"5b9029c4-3e1f-476d-8ede-6c56413b5f23","Type":"ContainerStarted","Data":"89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee"} Jan 29 17:22:30 crc kubenswrapper[4746]: I0129 17:22:30.870796 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p2255" podStartSLOduration=2.480271721 podStartE2EDuration="4.870780093s" podCreationTimestamp="2026-01-29 17:22:26 +0000 UTC" firstStartedPulling="2026-01-29 17:22:27.823436646 +0000 UTC m=+2870.224021340" lastFinishedPulling="2026-01-29 17:22:30.213945078 +0000 UTC m=+2872.614529712" observedRunningTime="2026-01-29 17:22:30.870379413 +0000 UTC m=+2873.270964067" watchObservedRunningTime="2026-01-29 17:22:30.870780093 +0000 UTC m=+2873.271364727" Jan 29 17:22:31 crc kubenswrapper[4746]: I0129 17:22:31.445490 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:22:31 crc kubenswrapper[4746]: E0129 17:22:31.445759 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:22:36 crc kubenswrapper[4746]: I0129 17:22:36.846287 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:36 crc kubenswrapper[4746]: I0129 17:22:36.846947 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:36 crc kubenswrapper[4746]: I0129 17:22:36.904319 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:36 crc kubenswrapper[4746]: I0129 17:22:36.969017 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:37 crc kubenswrapper[4746]: I0129 17:22:37.152169 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2255"] Jan 29 17:22:38 crc kubenswrapper[4746]: I0129 17:22:38.911414 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p2255" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="registry-server" containerID="cri-o://89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee" gracePeriod=2 Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.315372 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.447357 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx9hw\" (UniqueName: \"kubernetes.io/projected/5b9029c4-3e1f-476d-8ede-6c56413b5f23-kube-api-access-fx9hw\") pod \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.447518 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-catalog-content\") pod \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.447547 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-utilities\") pod \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\" (UID: \"5b9029c4-3e1f-476d-8ede-6c56413b5f23\") " Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.448429 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-utilities" (OuterVolumeSpecName: "utilities") pod "5b9029c4-3e1f-476d-8ede-6c56413b5f23" (UID: "5b9029c4-3e1f-476d-8ede-6c56413b5f23"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.453361 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9029c4-3e1f-476d-8ede-6c56413b5f23-kube-api-access-fx9hw" (OuterVolumeSpecName: "kube-api-access-fx9hw") pod "5b9029c4-3e1f-476d-8ede-6c56413b5f23" (UID: "5b9029c4-3e1f-476d-8ede-6c56413b5f23"). InnerVolumeSpecName "kube-api-access-fx9hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.473371 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b9029c4-3e1f-476d-8ede-6c56413b5f23" (UID: "5b9029c4-3e1f-476d-8ede-6c56413b5f23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.549457 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.549501 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9029c4-3e1f-476d-8ede-6c56413b5f23-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.549516 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx9hw\" (UniqueName: \"kubernetes.io/projected/5b9029c4-3e1f-476d-8ede-6c56413b5f23-kube-api-access-fx9hw\") on node \"crc\" DevicePath \"\"" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.925485 4746 generic.go:334] "Generic (PLEG): container finished" podID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerID="89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee" exitCode=0 Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.925547 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2255" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.925596 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2255" event={"ID":"5b9029c4-3e1f-476d-8ede-6c56413b5f23","Type":"ContainerDied","Data":"89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee"} Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.926666 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2255" event={"ID":"5b9029c4-3e1f-476d-8ede-6c56413b5f23","Type":"ContainerDied","Data":"af24b6f42dcca05eaca0e7983c6103a4f5c9f951f084eb98983150019b0e0396"} Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.926704 4746 scope.go:117] "RemoveContainer" containerID="89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.950303 4746 scope.go:117] "RemoveContainer" containerID="43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410" Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.974994 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2255"] Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.983124 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2255"] Jan 29 17:22:39 crc kubenswrapper[4746]: I0129 17:22:39.983661 4746 scope.go:117] "RemoveContainer" containerID="d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a" Jan 29 17:22:40 crc kubenswrapper[4746]: I0129 17:22:40.014434 4746 scope.go:117] "RemoveContainer" containerID="89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee" Jan 29 17:22:40 crc kubenswrapper[4746]: E0129 17:22:40.014929 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee\": container with ID starting with 89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee not found: ID does not exist" containerID="89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee" Jan 29 17:22:40 crc kubenswrapper[4746]: I0129 17:22:40.014974 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee"} err="failed to get container status \"89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee\": rpc error: code = NotFound desc = could not find container \"89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee\": container with ID starting with 89345bebf21c119cc98093bbbca160b8ab99edd4814a97ad1232a74ca0364cee not found: ID does not exist" Jan 29 17:22:40 crc kubenswrapper[4746]: I0129 17:22:40.014999 4746 scope.go:117] "RemoveContainer" containerID="43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410" Jan 29 17:22:40 crc kubenswrapper[4746]: E0129 17:22:40.015369 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410\": container with ID starting with 43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410 not found: ID does not exist" containerID="43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410" Jan 29 17:22:40 crc kubenswrapper[4746]: I0129 17:22:40.015399 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410"} err="failed to get container status \"43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410\": rpc error: code = NotFound desc = could not find container \"43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410\": container with ID starting with 43868b66a9b2f3e306b4329325c47614592ac4fbdca547e76186e0b1ab153410 not found: ID does not exist" Jan 29 17:22:40 crc kubenswrapper[4746]: I0129 17:22:40.015416 4746 scope.go:117] "RemoveContainer" containerID="d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a" Jan 29 17:22:40 crc kubenswrapper[4746]: E0129 17:22:40.015624 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a\": container with ID starting with d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a not found: ID does not exist" containerID="d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a" Jan 29 17:22:40 crc kubenswrapper[4746]: I0129 17:22:40.015649 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a"} err="failed to get container status \"d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a\": rpc error: code = NotFound desc = could not find container \"d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a\": container with ID starting with d250006cf01192454da7ae073a0cf6701ffe443fd7ec7d33bc666566604b462a not found: ID does not exist" Jan 29 17:22:40 crc kubenswrapper[4746]: I0129 17:22:40.457139 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" path="/var/lib/kubelet/pods/5b9029c4-3e1f-476d-8ede-6c56413b5f23/volumes" Jan 29 17:22:46 crc kubenswrapper[4746]: I0129 17:22:46.445691 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:22:46 crc kubenswrapper[4746]: E0129 17:22:46.445920 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:23:00 crc kubenswrapper[4746]: I0129 17:23:00.446453 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:23:00 crc kubenswrapper[4746]: E0129 17:23:00.447125 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:23:12 crc kubenswrapper[4746]: I0129 17:23:12.445395 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:23:12 crc 
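[annotation] The E-level "ContainerStatus from runtime service failed ... NotFound" entries above are the benign tail of container removal: scope.go logs RemoveContainer, CRI-O deletes the container, and the follow-up status probe gets gRPC NotFound because the ID really is gone; the kubelet records the error and moves on, then cleans the orphaned volumes dir at 17:22:40. A sketch of the usual client-side pattern, using the real CRI v1 types but with the gRPC connection wiring omitted and the helper names my own:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // alreadyGone reports whether a CRI error just means the container has
    // already been removed, which is what CRI-O's `code = NotFound` answers
    // above amount to.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    // statusOrNil is the usual pattern around ContainerStatus: a NotFound
    // after RemoveContainer is expected, anything else is a real failure.
    func statusOrNil(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (*runtimeapi.ContainerStatusResponse, error) {
        resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if err != nil {
            if alreadyGone(err) {
                return nil, nil // container already deleted; nothing to report
            }
            return nil, err
        }
        return resp, nil
    }

    func main() {
        // Reconstruct the kind of error logged at 17:22:40 and classify it.
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println("already gone:", alreadyGone(err)) // prints: already gone: true
    }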
kubenswrapper[4746]: E0129 17:23:12.446115 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.021436 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l7njw"] Jan 29 17:23:17 crc kubenswrapper[4746]: E0129 17:23:17.022142 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="extract-content" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.022155 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="extract-content" Jan 29 17:23:17 crc kubenswrapper[4746]: E0129 17:23:17.022163 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="registry-server" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.022169 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="registry-server" Jan 29 17:23:17 crc kubenswrapper[4746]: E0129 17:23:17.022185 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="extract-utilities" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.022218 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="extract-utilities" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.022360 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9029c4-3e1f-476d-8ede-6c56413b5f23" containerName="registry-server" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.023316 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.026274 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-utilities\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.026364 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6nh9\" (UniqueName: \"kubernetes.io/projected/dce72e0a-3332-43c6-b3fd-e503bd7a2849-kube-api-access-j6nh9\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.026598 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-catalog-content\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.045325 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7njw"] Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.127674 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-catalog-content\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.128091 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-utilities\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:17 crc kubenswrapper[4746]: I0129 17:23:17.128244 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6nh9\" (UniqueName: \"kubernetes.io/projected/dce72e0a-3332-43c6-b3fd-e503bd7a2849-kube-api-access-j6nh9\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:18 crc kubenswrapper[4746]: I0129 17:23:18.003337 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-catalog-content\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:18 crc kubenswrapper[4746]: I0129 17:23:18.003335 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-utilities\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:18 crc kubenswrapper[4746]: I0129 17:23:18.009650 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j6nh9\" (UniqueName: \"kubernetes.io/projected/dce72e0a-3332-43c6-b3fd-e503bd7a2849-kube-api-access-j6nh9\") pod \"community-operators-l7njw\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:18 crc kubenswrapper[4746]: I0129 17:23:18.304694 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:23:18 crc kubenswrapper[4746]: I0129 17:23:18.709468 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l7njw"] Jan 29 17:23:19 crc kubenswrapper[4746]: I0129 17:23:19.213428 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7njw" event={"ID":"dce72e0a-3332-43c6-b3fd-e503bd7a2849","Type":"ContainerStarted","Data":"95bb6ab11f6892e7a44b845e78a49d9d0cd2f0ec617fc86a82aef3aae88d6410"} Jan 29 17:23:20 crc kubenswrapper[4746]: I0129 17:23:20.220404 4746 generic.go:334] "Generic (PLEG): container finished" podID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerID="853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c" exitCode=0 Jan 29 17:23:20 crc kubenswrapper[4746]: I0129 17:23:20.220550 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7njw" event={"ID":"dce72e0a-3332-43c6-b3fd-e503bd7a2849","Type":"ContainerDied","Data":"853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c"} Jan 29 17:23:20 crc kubenswrapper[4746]: E0129 17:23:20.356463 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:23:20 crc kubenswrapper[4746]: E0129 17:23:20.356672 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6nh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l7njw_openshift-marketplace(dce72e0a-3332-43c6-b3fd-e503bd7a2849): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:23:20 crc kubenswrapper[4746]: E0129 17:23:20.358733 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:23:21 crc kubenswrapper[4746]: E0129 17:23:21.229458 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:23:23 crc kubenswrapper[4746]: I0129 17:23:23.445763 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:23:23 crc kubenswrapper[4746]: E0129 17:23:23.446008 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:23:36 crc kubenswrapper[4746]: E0129 17:23:36.575803 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:23:36 crc 
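[annotation] The pull above dies before any image data moves. Under the Docker Registry v2 token scheme, the client first hits /v2/, gets a 401 with a WWW-Authenticate challenge naming a token realm, then asks that realm for a bearer token; registry.redhat.io answers that second request with 403 when the cluster presents no valid credentials for the repository, which is exactly what "Requesting bearer token: invalid status code from registry 403" reports. The fix lives in the pull secret, not the network. An anonymous, read-only probe sketch:

    package main

    import (
        "fmt"
        "net/http"
    )

    // Probe the registry the way an image pull starts: GET /v2/ normally
    // yields 401 plus a WWW-Authenticate header naming the token realm; the
    // follow-up token request is where the 403 in the log above comes from.
    func main() {
        resp, err := http.Get("https://registry.redhat.io/v2/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:   ", resp.Status)
        fmt.Println("challenge:", resp.Header.Get("WWW-Authenticate"))
        // Requesting the advertised realm with
        // scope=repository:redhat/community-operator-index:pull and no valid
        // Basic credentials reproduces the 403 (Forbidden) seen above.
    }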
kubenswrapper[4746]: E0129 17:23:36.576528 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6nh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l7njw_openshift-marketplace(dce72e0a-3332-43c6-b3fd-e503bd7a2849): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:23:36 crc kubenswrapper[4746]: E0129 17:23:36.578439 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:23:38 crc kubenswrapper[4746]: I0129 17:23:38.450153 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:23:38 crc kubenswrapper[4746]: E0129 17:23:38.450479 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:23:47 crc kubenswrapper[4746]: E0129 17:23:47.448281 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:23:53 crc kubenswrapper[4746]: I0129 17:23:53.445418 4746 
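[annotation] Each "Unhandled Error" blob above is the raw Go dump of the failing extract-content init container spec; restated as the corev1 API object it is far easier to read. Only fields visible in the dump are set below, the kube-api-access mount is trimmed for brevity, and ptr is the k8s.io/utils/ptr helper package (a sketch for reading, not a manifest to apply):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/utils/ptr"
    )

    // extractContent mirrors the init container dumped above: the catalog
    // image is copied out with /utilities/copy-content, all capabilities
    // dropped, running as a fixed non-root UID.
    var extractContent = corev1.Container{
        Name:    "extract-content",
        Image:   "registry.redhat.io/redhat/community-operator-index:v4.18",
        Command: []string{"/utilities/copy-content"},
        Args: []string{
            "--catalog.from=/configs",
            "--catalog.to=/extracted-catalog/catalog",
            "--cache.from=/tmp/cache",
            "--cache.to=/extracted-catalog/cache",
        },
        VolumeMounts: []corev1.VolumeMount{
            {Name: "utilities", MountPath: "/utilities"},
            {Name: "catalog-content", MountPath: "/extracted-catalog"},
        },
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        ImagePullPolicy:          corev1.PullAlways,
        SecurityContext: &corev1.SecurityContext{
            Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
            RunAsUser:                ptr.To[int64](1000170000),
            RunAsNonRoot:             ptr.To(true),
            AllowPrivilegeEscalation: ptr.To(false),
        },
    }

    func main() {
        fmt.Printf("%s pulls %s as UID %d\n",
            extractContent.Name, extractContent.Image, *extractContent.SecurityContext.RunAsUser)
    }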
scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:23:53 crc kubenswrapper[4746]: E0129 17:23:53.445864 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:24:01 crc kubenswrapper[4746]: E0129 17:24:01.569753 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:24:01 crc kubenswrapper[4746]: E0129 17:24:01.570353 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6nh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l7njw_openshift-marketplace(dce72e0a-3332-43c6-b3fd-e503bd7a2849): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:24:01 crc kubenswrapper[4746]: E0129 17:24:01.571542 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:24:04 crc kubenswrapper[4746]: I0129 17:24:04.446784 4746 scope.go:117] "RemoveContainer" 
containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:24:04 crc kubenswrapper[4746]: E0129 17:24:04.447275 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:24:12 crc kubenswrapper[4746]: E0129 17:24:12.448514 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:24:17 crc kubenswrapper[4746]: I0129 17:24:17.445949 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:24:17 crc kubenswrapper[4746]: E0129 17:24:17.446864 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:24:23 crc kubenswrapper[4746]: E0129 17:24:23.447982 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:24:30 crc kubenswrapper[4746]: I0129 17:24:30.445718 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:24:30 crc kubenswrapper[4746]: E0129 17:24:30.446438 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:24:38 crc kubenswrapper[4746]: E0129 17:24:38.459481 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:24:43 crc kubenswrapper[4746]: I0129 17:24:43.445530 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:24:43 crc kubenswrapper[4746]: E0129 17:24:43.446099 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:24:49 crc kubenswrapper[4746]: E0129 17:24:49.968553 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:24:49 crc kubenswrapper[4746]: E0129 17:24:49.969285 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6nh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l7njw_openshift-marketplace(dce72e0a-3332-43c6-b3fd-e503bd7a2849): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:24:49 crc kubenswrapper[4746]: E0129 17:24:49.970673 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:24:54 crc kubenswrapper[4746]: I0129 17:24:54.445793 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:24:54 crc kubenswrapper[4746]: E0129 17:24:54.446386 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:25:04 crc kubenswrapper[4746]: E0129 17:25:04.449723 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:25:09 crc kubenswrapper[4746]: I0129 17:25:09.446671 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:25:09 crc kubenswrapper[4746]: E0129 17:25:09.447585 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:25:19 crc kubenswrapper[4746]: E0129 17:25:19.450295 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:25:21 crc kubenswrapper[4746]: I0129 17:25:21.446532 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:25:21 crc kubenswrapper[4746]: E0129 17:25:21.447430 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:25:34 crc kubenswrapper[4746]: I0129 17:25:34.445282 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:25:34 crc kubenswrapper[4746]: E0129 17:25:34.445977 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:25:34 crc kubenswrapper[4746]: E0129 17:25:34.447579 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:25:45 crc kubenswrapper[4746]: E0129 17:25:45.448242 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:25:46 crc kubenswrapper[4746]: I0129 17:25:46.445654 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:25:46 crc kubenswrapper[4746]: E0129 17:25:46.445986 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:26:00 crc kubenswrapper[4746]: I0129 17:26:00.446653 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:26:00 crc kubenswrapper[4746]: E0129 17:26:00.447305 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:26:00 crc kubenswrapper[4746]: E0129 17:26:00.449808 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:26:12 crc kubenswrapper[4746]: I0129 17:26:12.445956 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:26:12 crc kubenswrapper[4746]: E0129 17:26:12.446823 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:26:15 crc kubenswrapper[4746]: E0129 17:26:15.570075 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:26:15 crc kubenswrapper[4746]: E0129 17:26:15.570258 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6nh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l7njw_openshift-marketplace(dce72e0a-3332-43c6-b3fd-e503bd7a2849): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:26:15 crc kubenswrapper[4746]: E0129 17:26:15.571400 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:26:25 crc kubenswrapper[4746]: I0129 17:26:25.445627 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:26:25 crc kubenswrapper[4746]: E0129 17:26:25.446459 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:26:29 crc kubenswrapper[4746]: E0129 17:26:29.448676 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:26:36 crc kubenswrapper[4746]: I0129 17:26:36.445320 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:26:36 crc kubenswrapper[4746]: E0129 17:26:36.446154 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:26:40 crc kubenswrapper[4746]: E0129 17:26:40.448086 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:26:50 crc kubenswrapper[4746]: I0129 17:26:50.446516 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:26:50 crc kubenswrapper[4746]: E0129 17:26:50.447349 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:26:54 crc kubenswrapper[4746]: E0129 17:26:54.448181 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:27:02 crc kubenswrapper[4746]: I0129 17:27:02.445734 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:27:02 crc kubenswrapper[4746]: E0129 17:27:02.446462 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:27:05 crc kubenswrapper[4746]: E0129 17:27:05.447447 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:27:16 crc kubenswrapper[4746]: I0129 17:27:16.446019 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:27:16 crc kubenswrapper[4746]: E0129 17:27:16.446908 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:27:16 crc kubenswrapper[4746]: E0129 17:27:16.448770 4746 pod_workers.go:1301] 
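[annotation] The cap is also visible empirically. The actual pull attempts (the "PullImage from image service failed" entries) land at 17:23:20, 17:23:36, 17:24:01, 17:24:49, 17:26:15, and, near the end of this capture, 17:29:01; the gaps roughly double, smeared by sync-loop granularity, on their way toward the five-minute ceiling. A few lines of Go to do the subtraction:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Wall-clock times of the pull attempts, copied from this log.
        stamps := []string{"17:23:20", "17:23:36", "17:24:01", "17:24:49", "17:26:15", "17:29:01"}
        prev, _ := time.Parse("15:04:05", stamps[0])
        for _, s := range stamps[1:] {
            t, _ := time.Parse("15:04:05", s)
            fmt.Printf("%s -> %s  gap %v\n", prev.Format("15:04:05"), s, t.Sub(prev))
            prev = t
        }
        // Gaps: 16s, 25s, 48s, 1m26s, 2m46s -- roughly doubling (plus
        // sync-loop granularity) toward the 5m image-pull backoff ceiling.
    }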
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.276149 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tgbq5/must-gather-zwvdn"] Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.278088 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.281769 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tgbq5"/"default-dockercfg-czf48" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.281965 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tgbq5"/"kube-root-ca.crt" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.283139 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tgbq5"/"openshift-service-ca.crt" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.305668 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tgbq5/must-gather-zwvdn"] Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.407321 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp6p2\" (UniqueName: \"kubernetes.io/projected/4429c693-c16c-4c9c-9cfa-6a827c540811-kube-api-access-jp6p2\") pod \"must-gather-zwvdn\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.407409 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4429c693-c16c-4c9c-9cfa-6a827c540811-must-gather-output\") pod \"must-gather-zwvdn\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.508882 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp6p2\" (UniqueName: \"kubernetes.io/projected/4429c693-c16c-4c9c-9cfa-6a827c540811-kube-api-access-jp6p2\") pod \"must-gather-zwvdn\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.509012 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4429c693-c16c-4c9c-9cfa-6a827c540811-must-gather-output\") pod \"must-gather-zwvdn\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.509478 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4429c693-c16c-4c9c-9cfa-6a827c540811-must-gather-output\") pod \"must-gather-zwvdn\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.526855 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jp6p2\" (UniqueName: \"kubernetes.io/projected/4429c693-c16c-4c9c-9cfa-6a827c540811-kube-api-access-jp6p2\") pod \"must-gather-zwvdn\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:21 crc kubenswrapper[4746]: I0129 17:27:21.598303 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:27:22 crc kubenswrapper[4746]: I0129 17:27:22.009526 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tgbq5/must-gather-zwvdn"] Jan 29 17:27:22 crc kubenswrapper[4746]: I0129 17:27:22.017347 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:27:22 crc kubenswrapper[4746]: I0129 17:27:22.884069 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" event={"ID":"4429c693-c16c-4c9c-9cfa-6a827c540811","Type":"ContainerStarted","Data":"823a09672e4014b86c490ba6e8b8a48267da865c972252d08cfeba7601dd1d05"} Jan 29 17:27:28 crc kubenswrapper[4746]: I0129 17:27:28.451935 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:27:28 crc kubenswrapper[4746]: I0129 17:27:28.925338 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"43dba09b67eb01818ddc17ff2d11e06359f9a659fcd0477355e15b7687688ca2"} Jan 29 17:27:32 crc kubenswrapper[4746]: E0129 17:27:32.738458 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:27:33 crc kubenswrapper[4746]: I0129 17:27:33.961115 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" event={"ID":"4429c693-c16c-4c9c-9cfa-6a827c540811","Type":"ContainerStarted","Data":"86169f04b3e00b5588bb0d80e383ea6db1cca9f377463e563f3aca929c2b71e5"} Jan 29 17:27:33 crc kubenswrapper[4746]: I0129 17:27:33.961713 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" event={"ID":"4429c693-c16c-4c9c-9cfa-6a827c540811","Type":"ContainerStarted","Data":"5ffeecc6e5d0c5926eb4ee98ab14260f3cca4d073eb37b2e6fdf382e11031151"} Jan 29 17:27:33 crc kubenswrapper[4746]: I0129 17:27:33.979764 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" podStartSLOduration=1.970243317 podStartE2EDuration="12.979731699s" podCreationTimestamp="2026-01-29 17:27:21 +0000 UTC" firstStartedPulling="2026-01-29 17:27:22.01693485 +0000 UTC m=+3164.417519514" lastFinishedPulling="2026-01-29 17:27:33.026423252 +0000 UTC m=+3175.427007896" observedRunningTime="2026-01-29 17:27:33.975451882 +0000 UTC m=+3176.376036536" watchObservedRunningTime="2026-01-29 17:27:33.979731699 +0000 UTC m=+3176.380316343" Jan 29 17:27:43 crc kubenswrapper[4746]: E0129 17:27:43.448027 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:27:57 crc kubenswrapper[4746]: E0129 17:27:57.447277 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:28:08 crc kubenswrapper[4746]: E0129 17:28:08.452776 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:28:23 crc kubenswrapper[4746]: E0129 17:28:23.447613 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:28:28 crc kubenswrapper[4746]: I0129 17:28:28.850389 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8_7ca75f8d-6a20-4e49-8d94-a5f7159239cd/util/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.008252 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8_7ca75f8d-6a20-4e49-8d94-a5f7159239cd/util/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.104728 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8_7ca75f8d-6a20-4e49-8d94-a5f7159239cd/pull/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.112290 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8_7ca75f8d-6a20-4e49-8d94-a5f7159239cd/pull/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.251669 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8_7ca75f8d-6a20-4e49-8d94-a5f7159239cd/util/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.262103 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8_7ca75f8d-6a20-4e49-8d94-a5f7159239cd/pull/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.318240 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1qlzt8_7ca75f8d-6a20-4e49-8d94-a5f7159239cd/extract/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.488699 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-2bww8_dd135c2c-9e2d-434e-82b0-8f5a8bbd0123/manager/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.556155 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-52kgv_40e94942-ca30-4f19-b3bb-0dd32a419bb4/manager/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.859648 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-kgx8p_eaf582f0-a5ab-4ec3-8171-d7800c624ef9/manager/0.log" Jan 29 17:28:29 crc kubenswrapper[4746]: I0129 17:28:29.977802 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-95h56_cc3d7c3e-3d38-43f3-92ce-e4696ed6e776/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.078240 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-lzcpg_8d781faa-902f-41cf-ab3a-ad07d2322345/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.169925 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-9r8qw_7629581f-7c9b-4b2a-9296-3afc1abca26c/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.386909 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-xqxxr_50aedec4-5794-4aac-ae6d-32c393128b8b/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.507880 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-q94c9_26375595-f5e1-4568-ac8b-8db08398d97a/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.645747 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-l886h_d72b4019-1caf-4f5e-8324-790ff6d0c4b1/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.660851 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-ggshz_df82c829-be95-477f-a566-2dec382d4598/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.800466 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-74gtz_c61df0fc-95b5-4b58-9801-75325f20e182/manager/0.log" Jan 29 17:28:30 crc kubenswrapper[4746]: I0129 17:28:30.896875 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-x4k9t_ada28b6b-5615-4bc1-ba2e-f1ab3408b64c/manager/0.log" Jan 29 17:28:31 crc kubenswrapper[4746]: I0129 17:28:31.094409 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-h6pgd_4427df73-583c-45ea-b592-0d282ac0b2d7/manager/0.log" Jan 29 17:28:31 crc kubenswrapper[4746]: I0129 17:28:31.128555 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-2t2tl_3462ff04-ad8e-4f2d-872a-26bb98e59484/manager/0.log" Jan 29 17:28:31 crc kubenswrapper[4746]: I0129 17:28:31.260405 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-86dfb79cc7v2zp5_38062c77-be0b-4138-b77e-330e2dd20cc0/manager/0.log" Jan 29 17:28:31 crc kubenswrapper[4746]: I0129 17:28:31.415513 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-757f46c65d-znnl8_ecd5da6c-5219-47b2-b290-402277b01934/operator/0.log" Jan 29 17:28:31 crc kubenswrapper[4746]: I0129 17:28:31.654730 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-tnxns_3819ba4d-9ba5-40e9-ada2-d444d9a80bb5/registry-server/0.log" Jan 29 17:28:31 crc kubenswrapper[4746]: I0129 17:28:31.812670 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-rdtrn_51559923-8ee0-41ae-b204-c7de34da4745/manager/0.log" Jan 29 17:28:31 crc kubenswrapper[4746]: I0129 17:28:31.880117 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-kzmwp_de3fc290-5f51-4b4e-845e-5f1020dd31bc/manager/0.log" Jan 29 17:28:32 crc kubenswrapper[4746]: I0129 17:28:32.132237 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lw9ng_8f22c3e7-ca4b-471c-8c28-6d817f25582d/operator/0.log" Jan 29 17:28:32 crc kubenswrapper[4746]: I0129 17:28:32.199245 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b6f655c79-fffl2_4157a634-ad19-42ac-9ef4-7249fe50798f/manager/0.log" Jan 29 17:28:32 crc kubenswrapper[4746]: I0129 17:28:32.264484 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-t88fd_30bcd237-b2b9-4c61-974b-bca62a288e84/manager/0.log" Jan 29 17:28:32 crc kubenswrapper[4746]: I0129 17:28:32.359307 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ls2m7_257b1191-ee6f-42f3-9894-78f1a43cfd3d/manager/0.log" Jan 29 17:28:32 crc kubenswrapper[4746]: I0129 17:28:32.445040 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-rmnff_07d0b8e0-e804-4007-a924-bdc50e4c1843/manager/0.log" Jan 29 17:28:32 crc kubenswrapper[4746]: I0129 17:28:32.560162 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-cdm9l_d3b1117b-c064-4425-94ff-7d9b2ff94b8d/manager/0.log" Jan 29 17:28:36 crc kubenswrapper[4746]: E0129 17:28:36.448614 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:28:50 crc kubenswrapper[4746]: I0129 17:28:50.036451 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gkxmb_b1d40aef-51e1-48d1-ac44-5ca93dd7b612/control-plane-machine-set-operator/0.log" Jan 29 17:28:50 crc kubenswrapper[4746]: I0129 17:28:50.154105 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t4j2d_3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3/kube-rbac-proxy/0.log" Jan 29 17:28:50 crc kubenswrapper[4746]: I0129 17:28:50.209559 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t4j2d_3b01c2b3-bb70-44e1-90ba-78ebb1cb97d3/machine-api-operator/0.log" Jan 29 17:28:50 crc kubenswrapper[4746]: E0129 17:28:50.448398 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:29:01 crc kubenswrapper[4746]: I0129 17:29:01.338715 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-tl7pv_d3df6da0-1959-41ce-a71b-1546e4752437/cert-manager-controller/0.log" Jan 29 17:29:01 crc kubenswrapper[4746]: I0129 17:29:01.454445 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-n7fr7_5cff161f-24ec-499d-bcef-c964f4b40972/cert-manager-cainjector/0.log" Jan 29 17:29:01 crc kubenswrapper[4746]: I0129 17:29:01.555233 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-qf9b8_6c6fa8e6-a609-4dd5-896f-a2e6d134b671/cert-manager-webhook/0.log" Jan 29 17:29:01 crc kubenswrapper[4746]: E0129 17:29:01.566928 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:29:01 crc kubenswrapper[4746]: E0129 17:29:01.567106 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6nh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l7njw_openshift-marketplace(dce72e0a-3332-43c6-b3fd-e503bd7a2849): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: 
invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:29:01 crc kubenswrapper[4746]: E0129 17:29:01.568247 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:29:13 crc kubenswrapper[4746]: E0129 17:29:13.448980 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:29:13 crc kubenswrapper[4746]: I0129 17:29:13.519451 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-grjr9_c27ce050-adb3-4698-a773-248eba35e281/nmstate-console-plugin/0.log" Jan 29 17:29:13 crc kubenswrapper[4746]: I0129 17:29:13.871592 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-zvdqn_4c62df53-5227-44f8-b8b4-dd208f98ca28/nmstate-handler/0.log" Jan 29 17:29:13 crc kubenswrapper[4746]: I0129 17:29:13.909011 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-r8rvf_9e668f5d-ff4a-4ca4-801f-e45e15354829/kube-rbac-proxy/0.log" Jan 29 17:29:14 crc kubenswrapper[4746]: I0129 17:29:14.024533 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-r8rvf_9e668f5d-ff4a-4ca4-801f-e45e15354829/nmstate-metrics/0.log" Jan 29 17:29:14 crc kubenswrapper[4746]: I0129 17:29:14.051783 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-5zhtq_52ea40af-55b3-41e8-9afd-314054287d7d/nmstate-operator/0.log" Jan 29 17:29:14 crc kubenswrapper[4746]: I0129 17:29:14.206965 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-27h58_ea9bc95d-aaa3-4050-8840-868eb964a03a/nmstate-webhook/0.log" Jan 29 17:29:26 crc kubenswrapper[4746]: E0129 17:29:26.447507 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:29:37 crc kubenswrapper[4746]: E0129 17:29:37.448544 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:29:39 crc kubenswrapper[4746]: I0129 17:29:39.758569 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sllnl_91614fd8-907a-4093-b290-20c533e82be5/kube-rbac-proxy/0.log" Jan 29 17:29:39 crc kubenswrapper[4746]: I0129 17:29:39.994992 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-frr-files/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.140134 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-frr-files/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.204699 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-reloader/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.247501 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-metrics/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.330441 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sllnl_91614fd8-907a-4093-b290-20c533e82be5/controller/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.348497 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-reloader/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.528829 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-frr-files/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.529852 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-reloader/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.542290 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-metrics/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.562706 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-metrics/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.741635 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-reloader/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.761223 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-frr-files/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.765145 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/cp-metrics/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.807664 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/controller/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.927785 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/frr-metrics/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.941309 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/kube-rbac-proxy/0.log" Jan 29 17:29:40 crc kubenswrapper[4746]: I0129 17:29:40.991509 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/kube-rbac-proxy-frr/0.log" Jan 29 17:29:41 crc 
kubenswrapper[4746]: I0129 17:29:41.124400 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/reloader/0.log" Jan 29 17:29:41 crc kubenswrapper[4746]: I0129 17:29:41.177352 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7z9cv_1bad14f2-00a0-4101-880f-fa01992db9d6/frr-k8s-webhook-server/0.log" Jan 29 17:29:41 crc kubenswrapper[4746]: I0129 17:29:41.350330 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-59c79db488-76xvk_8d178b7f-e0fd-45ce-a609-e65c58e026ee/manager/0.log" Jan 29 17:29:41 crc kubenswrapper[4746]: I0129 17:29:41.494312 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d57bc96cc-5fl5w_92209676-8aa8-4779-85ab-ff8f430449b2/webhook-server/0.log" Jan 29 17:29:41 crc kubenswrapper[4746]: I0129 17:29:41.626709 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v2k9q_b0e51f0a-824b-407e-ad15-09190e437c74/kube-rbac-proxy/0.log" Jan 29 17:29:42 crc kubenswrapper[4746]: I0129 17:29:42.155286 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v2k9q_b0e51f0a-824b-407e-ad15-09190e437c74/speaker/0.log" Jan 29 17:29:42 crc kubenswrapper[4746]: I0129 17:29:42.161831 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8qp6_f54d72a4-5843-4b08-baf7-86689474f3e2/frr/0.log" Jan 29 17:29:49 crc kubenswrapper[4746]: I0129 17:29:49.065132 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:29:49 crc kubenswrapper[4746]: I0129 17:29:49.065698 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:29:49 crc kubenswrapper[4746]: E0129 17:29:49.448182 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.090391 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc_bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46/util/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.263813 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc_bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46/util/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.316622 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc_bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46/pull/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.325935 
4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc_bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46/pull/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.486543 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc_bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46/util/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.488383 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc_bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46/pull/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.490006 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczccrc_bd0a4e1f-f8dc-4029-bdf8-90f6aedaef46/extract/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.635978 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw_1bc178bb-5ffb-4d68-b022-6b2025b2bfcb/util/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.798180 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw_1bc178bb-5ffb-4d68-b022-6b2025b2bfcb/pull/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.807364 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw_1bc178bb-5ffb-4d68-b022-6b2025b2bfcb/util/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.831644 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw_1bc178bb-5ffb-4d68-b022-6b2025b2bfcb/pull/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.974901 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw_1bc178bb-5ffb-4d68-b022-6b2025b2bfcb/util/0.log" Jan 29 17:29:55 crc kubenswrapper[4746]: I0129 17:29:55.981021 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw_1bc178bb-5ffb-4d68-b022-6b2025b2bfcb/pull/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.002239 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713zcxjw_1bc178bb-5ffb-4d68-b022-6b2025b2bfcb/extract/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.125338 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7_e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0/util/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.318226 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7_e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0/util/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.329466 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7_e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0/pull/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.352648 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7_e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0/pull/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.479139 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7_e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0/util/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.511095 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7_e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0/extract/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.533835 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e57l6r7_e4aa0089-6cc2-419f-9d55-3ad1c8e9a7f0/pull/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.640661 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98p6v_da150909-e111-42fc-aa44-a4e181c3e57a/extract-utilities/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.818881 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98p6v_da150909-e111-42fc-aa44-a4e181c3e57a/extract-content/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.850639 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98p6v_da150909-e111-42fc-aa44-a4e181c3e57a/extract-utilities/0.log" Jan 29 17:29:56 crc kubenswrapper[4746]: I0129 17:29:56.875527 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98p6v_da150909-e111-42fc-aa44-a4e181c3e57a/extract-content/0.log" Jan 29 17:29:57 crc kubenswrapper[4746]: I0129 17:29:57.042908 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98p6v_da150909-e111-42fc-aa44-a4e181c3e57a/extract-utilities/0.log" Jan 29 17:29:57 crc kubenswrapper[4746]: I0129 17:29:57.095087 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98p6v_da150909-e111-42fc-aa44-a4e181c3e57a/extract-content/0.log" Jan 29 17:29:57 crc kubenswrapper[4746]: I0129 17:29:57.231326 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l7njw_dce72e0a-3332-43c6-b3fd-e503bd7a2849/extract-utilities/0.log" Jan 29 17:29:57 crc kubenswrapper[4746]: I0129 17:29:57.531444 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-l7njw_dce72e0a-3332-43c6-b3fd-e503bd7a2849/extract-utilities/0.log" Jan 29 17:29:57 crc kubenswrapper[4746]: I0129 17:29:57.552889 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-98p6v_da150909-e111-42fc-aa44-a4e181c3e57a/registry-server/0.log" Jan 29 17:29:57 crc kubenswrapper[4746]: I0129 17:29:57.723781 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-l7njw_dce72e0a-3332-43c6-b3fd-e503bd7a2849/extract-utilities/0.log" Jan 29 17:29:57 crc kubenswrapper[4746]: I0129 17:29:57.865052 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjcnr_d26e9e37-91e8-407b-b16d-2ce68fa11f2d/extract-utilities/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.016708 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjcnr_d26e9e37-91e8-407b-b16d-2ce68fa11f2d/extract-utilities/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.056496 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjcnr_d26e9e37-91e8-407b-b16d-2ce68fa11f2d/extract-content/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.056676 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjcnr_d26e9e37-91e8-407b-b16d-2ce68fa11f2d/extract-content/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.220446 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjcnr_d26e9e37-91e8-407b-b16d-2ce68fa11f2d/extract-content/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.235658 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjcnr_d26e9e37-91e8-407b-b16d-2ce68fa11f2d/extract-utilities/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.418767 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qmpr4_a75f7336-fc5b-42b8-8315-d2ec3025832b/marketplace-operator/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.513276 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfcrm_6c5d371b-e906-4098-b72c-7b41c5fd2ec6/extract-utilities/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.746409 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wjcnr_d26e9e37-91e8-407b-b16d-2ce68fa11f2d/registry-server/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.760806 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfcrm_6c5d371b-e906-4098-b72c-7b41c5fd2ec6/extract-utilities/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.785647 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfcrm_6c5d371b-e906-4098-b72c-7b41c5fd2ec6/extract-content/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.805598 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfcrm_6c5d371b-e906-4098-b72c-7b41c5fd2ec6/extract-content/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.944638 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfcrm_6c5d371b-e906-4098-b72c-7b41c5fd2ec6/extract-utilities/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.983901 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrwjj_a2a7e5df-24f7-4400-93be-0c812a827d15/extract-utilities/0.log" Jan 29 17:29:58 crc kubenswrapper[4746]: I0129 17:29:58.986078 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfcrm_6c5d371b-e906-4098-b72c-7b41c5fd2ec6/extract-content/0.log" Jan 29 17:29:59 crc kubenswrapper[4746]: I0129 17:29:59.080594 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfcrm_6c5d371b-e906-4098-b72c-7b41c5fd2ec6/registry-server/0.log" Jan 29 17:29:59 crc kubenswrapper[4746]: I0129 17:29:59.159922 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrwjj_a2a7e5df-24f7-4400-93be-0c812a827d15/extract-utilities/0.log" Jan 29 17:29:59 crc kubenswrapper[4746]: I0129 17:29:59.180406 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrwjj_a2a7e5df-24f7-4400-93be-0c812a827d15/extract-content/0.log" Jan 29 17:29:59 crc kubenswrapper[4746]: I0129 17:29:59.193408 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrwjj_a2a7e5df-24f7-4400-93be-0c812a827d15/extract-content/0.log" Jan 29 17:29:59 crc kubenswrapper[4746]: I0129 17:29:59.325694 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrwjj_a2a7e5df-24f7-4400-93be-0c812a827d15/extract-utilities/0.log" Jan 29 17:29:59 crc kubenswrapper[4746]: I0129 17:29:59.366497 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrwjj_a2a7e5df-24f7-4400-93be-0c812a827d15/extract-content/0.log" Jan 29 17:29:59 crc kubenswrapper[4746]: I0129 17:29:59.747619 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jrwjj_a2a7e5df-24f7-4400-93be-0c812a827d15/registry-server/0.log" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.151630 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8"] Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.153339 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.155173 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.155606 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.160829 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8"] Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.198535 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-secret-volume\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.198738 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6s6\" (UniqueName: \"kubernetes.io/projected/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-kube-api-access-rb6s6\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.198946 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-config-volume\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.300203 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-config-volume\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.300294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-secret-volume\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.300370 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6s6\" (UniqueName: \"kubernetes.io/projected/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-kube-api-access-rb6s6\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.301323 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-config-volume\") pod 
\"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.308316 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-secret-volume\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.319835 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6s6\" (UniqueName: \"kubernetes.io/projected/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-kube-api-access-rb6s6\") pod \"collect-profiles-29495130-hm9f8\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.471798 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.902456 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8"] Jan 29 17:30:00 crc kubenswrapper[4746]: I0129 17:30:00.918931 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" event={"ID":"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689","Type":"ContainerStarted","Data":"3076f67994c93aabdf156e8d958dbd679cb36acc03a2c90a548fee7ab9ea94b2"} Jan 29 17:30:01 crc kubenswrapper[4746]: I0129 17:30:01.928873 4746 generic.go:334] "Generic (PLEG): container finished" podID="f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689" containerID="a38f51b132c348dc7d1b09b4685345f71c6b899accd9ca2cd867e8f42c44d90c" exitCode=0 Jan 29 17:30:01 crc kubenswrapper[4746]: I0129 17:30:01.928919 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" event={"ID":"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689","Type":"ContainerDied","Data":"a38f51b132c348dc7d1b09b4685345f71c6b899accd9ca2cd867e8f42c44d90c"} Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.231543 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.348033 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb6s6\" (UniqueName: \"kubernetes.io/projected/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-kube-api-access-rb6s6\") pod \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.348118 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-secret-volume\") pod \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.348155 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-config-volume\") pod \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\" (UID: \"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689\") " Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.349047 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-config-volume" (OuterVolumeSpecName: "config-volume") pod "f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689" (UID: "f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.353466 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-kube-api-access-rb6s6" (OuterVolumeSpecName: "kube-api-access-rb6s6") pod "f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689" (UID: "f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689"). InnerVolumeSpecName "kube-api-access-rb6s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.370969 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689" (UID: "f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.450010 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb6s6\" (UniqueName: \"kubernetes.io/projected/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-kube-api-access-rb6s6\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.450050 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.450067 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.943491 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" event={"ID":"f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689","Type":"ContainerDied","Data":"3076f67994c93aabdf156e8d958dbd679cb36acc03a2c90a548fee7ab9ea94b2"} Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.943534 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3076f67994c93aabdf156e8d958dbd679cb36acc03a2c90a548fee7ab9ea94b2" Jan 29 17:30:03 crc kubenswrapper[4746]: I0129 17:30:03.943539 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-hm9f8" Jan 29 17:30:04 crc kubenswrapper[4746]: I0129 17:30:04.325832 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp"] Jan 29 17:30:04 crc kubenswrapper[4746]: I0129 17:30:04.346028 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-pxvdp"] Jan 29 17:30:04 crc kubenswrapper[4746]: E0129 17:30:04.448783 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:30:04 crc kubenswrapper[4746]: I0129 17:30:04.457873 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02243086-76e8-4d02-98ce-a7cb0921996a" path="/var/lib/kubelet/pods/02243086-76e8-4d02-98ce-a7cb0921996a/volumes" Jan 29 17:30:16 crc kubenswrapper[4746]: E0129 17:30:16.447920 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:30:19 crc kubenswrapper[4746]: I0129 17:30:19.064848 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:30:19 crc kubenswrapper[4746]: I0129 17:30:19.065105 4746 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:30:28 crc kubenswrapper[4746]: E0129 17:30:28.453390 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:30:42 crc kubenswrapper[4746]: E0129 17:30:42.453710 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:30:42 crc kubenswrapper[4746]: I0129 17:30:42.751575 4746 scope.go:117] "RemoveContainer" containerID="68d5813546e1aff5f8bd8fd1e75c6fb63f8f6788b13eb25d81eda039fa8d65e3" Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.065249 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.066034 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.066083 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.066895 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43dba09b67eb01818ddc17ff2d11e06359f9a659fcd0477355e15b7687688ca2"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.066978 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://43dba09b67eb01818ddc17ff2d11e06359f9a659fcd0477355e15b7687688ca2" gracePeriod=600 Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.266660 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="43dba09b67eb01818ddc17ff2d11e06359f9a659fcd0477355e15b7687688ca2" exitCode=0 Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.266887 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" 
event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"43dba09b67eb01818ddc17ff2d11e06359f9a659fcd0477355e15b7687688ca2"} Jan 29 17:30:49 crc kubenswrapper[4746]: I0129 17:30:49.267219 4746 scope.go:117] "RemoveContainer" containerID="feb69fc1b3ba038fab0d2bf4c2b11d59686f00cb336c5f149382a4917a832594" Jan 29 17:30:50 crc kubenswrapper[4746]: I0129 17:30:50.275448 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerStarted","Data":"56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676"} Jan 29 17:30:55 crc kubenswrapper[4746]: E0129 17:30:55.448444 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:31:09 crc kubenswrapper[4746]: E0129 17:31:09.449618 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:31:12 crc kubenswrapper[4746]: I0129 17:31:12.452006 4746 generic.go:334] "Generic (PLEG): container finished" podID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerID="5ffeecc6e5d0c5926eb4ee98ab14260f3cca4d073eb37b2e6fdf382e11031151" exitCode=0 Jan 29 17:31:12 crc kubenswrapper[4746]: I0129 17:31:12.453690 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" event={"ID":"4429c693-c16c-4c9c-9cfa-6a827c540811","Type":"ContainerDied","Data":"5ffeecc6e5d0c5926eb4ee98ab14260f3cca4d073eb37b2e6fdf382e11031151"} Jan 29 17:31:12 crc kubenswrapper[4746]: I0129 17:31:12.454315 4746 scope.go:117] "RemoveContainer" containerID="5ffeecc6e5d0c5926eb4ee98ab14260f3cca4d073eb37b2e6fdf382e11031151" Jan 29 17:31:13 crc kubenswrapper[4746]: I0129 17:31:13.362335 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tgbq5_must-gather-zwvdn_4429c693-c16c-4c9c-9cfa-6a827c540811/gather/0.log" Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.230529 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tgbq5/must-gather-zwvdn"] Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.231142 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerName="copy" containerID="cri-o://86169f04b3e00b5588bb0d80e383ea6db1cca9f377463e563f3aca929c2b71e5" gracePeriod=2 Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.237081 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tgbq5/must-gather-zwvdn"] Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.517734 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tgbq5_must-gather-zwvdn_4429c693-c16c-4c9c-9cfa-6a827c540811/copy/0.log" Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.518649 4746 generic.go:334] "Generic (PLEG): container finished" podID="4429c693-c16c-4c9c-9cfa-6a827c540811" 
containerID="86169f04b3e00b5588bb0d80e383ea6db1cca9f377463e563f3aca929c2b71e5" exitCode=143 Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.637387 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tgbq5_must-gather-zwvdn_4429c693-c16c-4c9c-9cfa-6a827c540811/copy/0.log" Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.637840 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.714510 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4429c693-c16c-4c9c-9cfa-6a827c540811-must-gather-output\") pod \"4429c693-c16c-4c9c-9cfa-6a827c540811\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.714593 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp6p2\" (UniqueName: \"kubernetes.io/projected/4429c693-c16c-4c9c-9cfa-6a827c540811-kube-api-access-jp6p2\") pod \"4429c693-c16c-4c9c-9cfa-6a827c540811\" (UID: \"4429c693-c16c-4c9c-9cfa-6a827c540811\") " Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.720389 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4429c693-c16c-4c9c-9cfa-6a827c540811-kube-api-access-jp6p2" (OuterVolumeSpecName: "kube-api-access-jp6p2") pod "4429c693-c16c-4c9c-9cfa-6a827c540811" (UID: "4429c693-c16c-4c9c-9cfa-6a827c540811"). InnerVolumeSpecName "kube-api-access-jp6p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.802780 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4429c693-c16c-4c9c-9cfa-6a827c540811-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4429c693-c16c-4c9c-9cfa-6a827c540811" (UID: "4429c693-c16c-4c9c-9cfa-6a827c540811"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.815672 4746 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4429c693-c16c-4c9c-9cfa-6a827c540811-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:21 crc kubenswrapper[4746]: I0129 17:31:21.815711 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp6p2\" (UniqueName: \"kubernetes.io/projected/4429c693-c16c-4c9c-9cfa-6a827c540811-kube-api-access-jp6p2\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:22 crc kubenswrapper[4746]: E0129 17:31:22.448652 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:31:22 crc kubenswrapper[4746]: I0129 17:31:22.454414 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" path="/var/lib/kubelet/pods/4429c693-c16c-4c9c-9cfa-6a827c540811/volumes" Jan 29 17:31:22 crc kubenswrapper[4746]: I0129 17:31:22.525412 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tgbq5_must-gather-zwvdn_4429c693-c16c-4c9c-9cfa-6a827c540811/copy/0.log" Jan 29 17:31:22 crc kubenswrapper[4746]: I0129 17:31:22.525830 4746 scope.go:117] "RemoveContainer" containerID="86169f04b3e00b5588bb0d80e383ea6db1cca9f377463e563f3aca929c2b71e5" Jan 29 17:31:22 crc kubenswrapper[4746]: I0129 17:31:22.525881 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tgbq5/must-gather-zwvdn" Jan 29 17:31:22 crc kubenswrapper[4746]: I0129 17:31:22.545620 4746 scope.go:117] "RemoveContainer" containerID="5ffeecc6e5d0c5926eb4ee98ab14260f3cca4d073eb37b2e6fdf382e11031151" Jan 29 17:31:36 crc kubenswrapper[4746]: E0129 17:31:36.449484 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.684767 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vzs9b"] Jan 29 17:31:38 crc kubenswrapper[4746]: E0129 17:31:38.685906 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerName="gather" Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.685929 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerName="gather" Jan 29 17:31:38 crc kubenswrapper[4746]: E0129 17:31:38.685959 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerName="copy" Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.685972 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerName="copy" Jan 29 17:31:38 crc kubenswrapper[4746]: E0129 17:31:38.685992 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689" containerName="collect-profiles" 
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.686005 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689" containerName="collect-profiles"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.686254 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerName="gather"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.686285 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4429c693-c16c-4c9c-9cfa-6a827c540811" containerName="copy"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.686309 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a8cf1a-e575-4bd2-b3ab-33a42d5ed689" containerName="collect-profiles"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.687908 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.711562 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzs9b"]
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.758692 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjgx\" (UniqueName: \"kubernetes.io/projected/815d75fe-ab61-4a32-aae2-067a346ef175-kube-api-access-cbjgx\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.758791 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-utilities\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.758833 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-catalog-content\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.860046 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjgx\" (UniqueName: \"kubernetes.io/projected/815d75fe-ab61-4a32-aae2-067a346ef175-kube-api-access-cbjgx\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.860139 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-utilities\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.860204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-catalog-content\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.860968 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-utilities\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.860986 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-catalog-content\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:38 crc kubenswrapper[4746]: I0129 17:31:38.880360 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjgx\" (UniqueName: \"kubernetes.io/projected/815d75fe-ab61-4a32-aae2-067a346ef175-kube-api-access-cbjgx\") pod \"redhat-operators-vzs9b\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") " pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:39 crc kubenswrapper[4746]: I0129 17:31:39.012322 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:39 crc kubenswrapper[4746]: I0129 17:31:39.441011 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vzs9b"]
Jan 29 17:31:39 crc kubenswrapper[4746]: W0129 17:31:39.444646 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod815d75fe_ab61_4a32_aae2_067a346ef175.slice/crio-4709c1fd72733c1606fb44af4c6d1ca3f7a2d3c0744227e15b0c441323595519 WatchSource:0}: Error finding container 4709c1fd72733c1606fb44af4c6d1ca3f7a2d3c0744227e15b0c441323595519: Status 404 returned error can't find the container with id 4709c1fd72733c1606fb44af4c6d1ca3f7a2d3c0744227e15b0c441323595519
Jan 29 17:31:39 crc kubenswrapper[4746]: I0129 17:31:39.653763 4746 generic.go:334] "Generic (PLEG): container finished" podID="815d75fe-ab61-4a32-aae2-067a346ef175" containerID="3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60" exitCode=0
Jan 29 17:31:39 crc kubenswrapper[4746]: I0129 17:31:39.653813 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzs9b" event={"ID":"815d75fe-ab61-4a32-aae2-067a346ef175","Type":"ContainerDied","Data":"3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60"}
Jan 29 17:31:39 crc kubenswrapper[4746]: I0129 17:31:39.653849 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzs9b" event={"ID":"815d75fe-ab61-4a32-aae2-067a346ef175","Type":"ContainerStarted","Data":"4709c1fd72733c1606fb44af4c6d1ca3f7a2d3c0744227e15b0c441323595519"}
Jan 29 17:31:40 crc kubenswrapper[4746]: I0129 17:31:40.662414 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzs9b" event={"ID":"815d75fe-ab61-4a32-aae2-067a346ef175","Type":"ContainerStarted","Data":"4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6"}
Jan 29 17:31:41 crc kubenswrapper[4746]: I0129 17:31:41.674536 4746 generic.go:334] "Generic (PLEG): container finished" podID="815d75fe-ab61-4a32-aae2-067a346ef175" containerID="4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6" exitCode=0
Jan 29 17:31:41 crc kubenswrapper[4746]: I0129 17:31:41.674621 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzs9b" event={"ID":"815d75fe-ab61-4a32-aae2-067a346ef175","Type":"ContainerDied","Data":"4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6"}
Jan 29 17:31:43 crc kubenswrapper[4746]: I0129 17:31:43.695000 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzs9b" event={"ID":"815d75fe-ab61-4a32-aae2-067a346ef175","Type":"ContainerStarted","Data":"e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a"}
Jan 29 17:31:43 crc kubenswrapper[4746]: I0129 17:31:43.728963 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vzs9b" podStartSLOduration=2.682946493 podStartE2EDuration="5.728936153s" podCreationTimestamp="2026-01-29 17:31:38 +0000 UTC" firstStartedPulling="2026-01-29 17:31:39.655239398 +0000 UTC m=+3422.055824042" lastFinishedPulling="2026-01-29 17:31:42.701229058 +0000 UTC m=+3425.101813702" observedRunningTime="2026-01-29 17:31:43.718722416 +0000 UTC m=+3426.119307070" watchObservedRunningTime="2026-01-29 17:31:43.728936153 +0000 UTC m=+3426.129520817"
Jan 29 17:31:48 crc kubenswrapper[4746]: E0129 17:31:48.451363 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:31:49 crc kubenswrapper[4746]: I0129 17:31:49.013148 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:49 crc kubenswrapper[4746]: I0129 17:31:49.013239 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:50 crc kubenswrapper[4746]: I0129 17:31:50.066690 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vzs9b" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="registry-server" probeResult="failure" output=<
Jan 29 17:31:50 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s
Jan 29 17:31:50 crc kubenswrapper[4746]: >
Jan 29 17:31:59 crc kubenswrapper[4746]: I0129 17:31:59.082260 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:59 crc kubenswrapper[4746]: I0129 17:31:59.149124 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:31:59 crc kubenswrapper[4746]: I0129 17:31:59.328498 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzs9b"]
Jan 29 17:32:00 crc kubenswrapper[4746]: I0129 17:32:00.820603 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vzs9b" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="registry-server" containerID="cri-o://e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a" gracePeriod=2
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.775956 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzs9b"
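The startup-probe failure above prints "timeout: failed to connect service \":50051\" within 1s": the registry-server's gRPC endpoint on port 50051 was not yet accepting connections one second into the check, and nine seconds later the same probe reports status="started". A rough equivalent of that reachability check in Go, simplified to a plain TCP dial with the one-second budget from the log (the real probe speaks the gRPC health protocol):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// One-second budget, as in the probe output above.
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("service reachable")
}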
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.829908 4746 generic.go:334] "Generic (PLEG): container finished" podID="815d75fe-ab61-4a32-aae2-067a346ef175" containerID="e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a" exitCode=0
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.829972 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzs9b" event={"ID":"815d75fe-ab61-4a32-aae2-067a346ef175","Type":"ContainerDied","Data":"e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a"}
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.830019 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vzs9b" event={"ID":"815d75fe-ab61-4a32-aae2-067a346ef175","Type":"ContainerDied","Data":"4709c1fd72733c1606fb44af4c6d1ca3f7a2d3c0744227e15b0c441323595519"}
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.830022 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vzs9b"
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.830042 4746 scope.go:117] "RemoveContainer" containerID="e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a"
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.900608 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-catalog-content\") pod \"815d75fe-ab61-4a32-aae2-067a346ef175\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") "
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.900692 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjgx\" (UniqueName: \"kubernetes.io/projected/815d75fe-ab61-4a32-aae2-067a346ef175-kube-api-access-cbjgx\") pod \"815d75fe-ab61-4a32-aae2-067a346ef175\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") "
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.900798 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-utilities\") pod \"815d75fe-ab61-4a32-aae2-067a346ef175\" (UID: \"815d75fe-ab61-4a32-aae2-067a346ef175\") "
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.901629 4746 scope.go:117] "RemoveContainer" containerID="4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6"
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.901717 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-utilities" (OuterVolumeSpecName: "utilities") pod "815d75fe-ab61-4a32-aae2-067a346ef175" (UID: "815d75fe-ab61-4a32-aae2-067a346ef175"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.922460 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815d75fe-ab61-4a32-aae2-067a346ef175-kube-api-access-cbjgx" (OuterVolumeSpecName: "kube-api-access-cbjgx") pod "815d75fe-ab61-4a32-aae2-067a346ef175" (UID: "815d75fe-ab61-4a32-aae2-067a346ef175"). InnerVolumeSpecName "kube-api-access-cbjgx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:32:01 crc kubenswrapper[4746]: I0129 17:32:01.993398 4746 scope.go:117] "RemoveContainer" containerID="3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.002517 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.002558 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjgx\" (UniqueName: \"kubernetes.io/projected/815d75fe-ab61-4a32-aae2-067a346ef175-kube-api-access-cbjgx\") on node \"crc\" DevicePath \"\""
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.017470 4746 scope.go:117] "RemoveContainer" containerID="e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a"
Jan 29 17:32:02 crc kubenswrapper[4746]: E0129 17:32:02.021684 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a\": container with ID starting with e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a not found: ID does not exist" containerID="e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.021740 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a"} err="failed to get container status \"e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a\": rpc error: code = NotFound desc = could not find container \"e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a\": container with ID starting with e53719f3bdd769f40aecbc93f609d218cd738964b4f4fddac7b11a49f16bb89a not found: ID does not exist"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.021779 4746 scope.go:117] "RemoveContainer" containerID="4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6"
Jan 29 17:32:02 crc kubenswrapper[4746]: E0129 17:32:02.022427 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6\": container with ID starting with 4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6 not found: ID does not exist" containerID="4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.022469 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6"} err="failed to get container status \"4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6\": rpc error: code = NotFound desc = could not find container \"4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6\": container with ID starting with 4733d515610ecf3e3c8be4b40ae9d43be561ef7f99f26afc7f339d2d8dd5f5b6 not found: ID does not exist"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.022496 4746 scope.go:117] "RemoveContainer" containerID="3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60"
Jan 29 17:32:02 crc kubenswrapper[4746]: E0129 17:32:02.022855 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60\": container with ID starting with 3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60 not found: ID does not exist" containerID="3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.022883 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60"} err="failed to get container status \"3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60\": rpc error: code = NotFound desc = could not find container \"3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60\": container with ID starting with 3890d5bbc6470f91c0cb27b8476c343f917c41ec7fc33b7c09f7afbf29d07d60 not found: ID does not exist"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.062296 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "815d75fe-ab61-4a32-aae2-067a346ef175" (UID: "815d75fe-ab61-4a32-aae2-067a346ef175"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.103608 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d75fe-ab61-4a32-aae2-067a346ef175-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.163674 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vzs9b"]
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.172717 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vzs9b"]
Jan 29 17:32:02 crc kubenswrapper[4746]: E0129 17:32:02.447621 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:32:02 crc kubenswrapper[4746]: I0129 17:32:02.454596 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" path="/var/lib/kubelet/pods/815d75fe-ab61-4a32-aae2-067a346ef175/volumes"
Jan 29 17:32:10 crc kubenswrapper[4746]: I0129 17:32:10.950629 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kszj6"]
Jan 29 17:32:10 crc kubenswrapper[4746]: E0129 17:32:10.951669 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="extract-content"
Jan 29 17:32:10 crc kubenswrapper[4746]: I0129 17:32:10.951683 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="extract-content"
Jan 29 17:32:10 crc kubenswrapper[4746]: E0129 17:32:10.951700 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="extract-utilities"
Jan 29 17:32:10 crc kubenswrapper[4746]: I0129 17:32:10.951707 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="extract-utilities"
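The RemoveContainer / "ContainerStatus from runtime service failed" pairs at 17:32:02 above show a benign race: by the time the kubelet re-queries a container it has just deleted, CRI-O answers with gRPC NotFound, which the kubelet logs at info level and treats as already-removed. A sketch of that idempotent-delete pattern; removeContainer and deleteFn are illustrative stand-ins, not kubelet functions (the real kubelet goes through the CRI client):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteFn stands in for the CRI RemoveContainer call (hypothetical).
func removeContainer(id string, deleteFn func(string) error) error {
	err := deleteFn(id)
	if status.Code(err) == codes.NotFound {
		// Already gone on the runtime side: log it and treat as success.
		fmt.Printf("DeleteContainer returned error containerID=%q err=%v\n", id, err)
		return nil
	}
	return err
}

func main() {
	alreadyGone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	if err := removeContainer("e53719f3bdd7", alreadyGone); err != nil {
		fmt.Println("unexpected:", err)
	}
}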
Jan 29 17:32:10 crc kubenswrapper[4746]: E0129 17:32:10.951725 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="registry-server"
Jan 29 17:32:10 crc kubenswrapper[4746]: I0129 17:32:10.951732 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="registry-server"
Jan 29 17:32:10 crc kubenswrapper[4746]: I0129 17:32:10.951877 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="815d75fe-ab61-4a32-aae2-067a346ef175" containerName="registry-server"
Jan 29 17:32:10 crc kubenswrapper[4746]: I0129 17:32:10.952803 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:10 crc kubenswrapper[4746]: I0129 17:32:10.972521 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kszj6"]
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.140719 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9pl\" (UniqueName: \"kubernetes.io/projected/6edd6557-0b51-40fe-86bb-8f1f3af55b10-kube-api-access-9c9pl\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.140826 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-utilities\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.140859 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-catalog-content\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.242253 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-utilities\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.242303 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-catalog-content\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.242368 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9pl\" (UniqueName: \"kubernetes.io/projected/6edd6557-0b51-40fe-86bb-8f1f3af55b10-kube-api-access-9c9pl\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.242843 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-utilities\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.242881 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-catalog-content\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.269016 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9pl\" (UniqueName: \"kubernetes.io/projected/6edd6557-0b51-40fe-86bb-8f1f3af55b10-kube-api-access-9c9pl\") pod \"certified-operators-kszj6\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") " pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.277710 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.569311 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kszj6"]
Jan 29 17:32:11 crc kubenswrapper[4746]: W0129 17:32:11.578560 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6edd6557_0b51_40fe_86bb_8f1f3af55b10.slice/crio-e237c8218ddf047a5efd938caebcd2f7ddf0126be3c64123c45c78b47f82b8d6 WatchSource:0}: Error finding container e237c8218ddf047a5efd938caebcd2f7ddf0126be3c64123c45c78b47f82b8d6: Status 404 returned error can't find the container with id e237c8218ddf047a5efd938caebcd2f7ddf0126be3c64123c45c78b47f82b8d6
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.913630 4746 generic.go:334] "Generic (PLEG): container finished" podID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerID="a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201" exitCode=0
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.913671 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kszj6" event={"ID":"6edd6557-0b51-40fe-86bb-8f1f3af55b10","Type":"ContainerDied","Data":"a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201"}
Jan 29 17:32:11 crc kubenswrapper[4746]: I0129 17:32:11.913707 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kszj6" event={"ID":"6edd6557-0b51-40fe-86bb-8f1f3af55b10","Type":"ContainerStarted","Data":"e237c8218ddf047a5efd938caebcd2f7ddf0126be3c64123c45c78b47f82b8d6"}
Jan 29 17:32:13 crc kubenswrapper[4746]: I0129 17:32:13.934158 4746 generic.go:334] "Generic (PLEG): container finished" podID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerID="1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8" exitCode=0
Jan 29 17:32:13 crc kubenswrapper[4746]: I0129 17:32:13.934237 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kszj6" event={"ID":"6edd6557-0b51-40fe-86bb-8f1f3af55b10","Type":"ContainerDied","Data":"1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8"}
Jan 29 17:32:14 crc kubenswrapper[4746]: I0129 17:32:14.945247 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kszj6" event={"ID":"6edd6557-0b51-40fe-86bb-8f1f3af55b10","Type":"ContainerStarted","Data":"43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b"}
Jan 29 17:32:14 crc kubenswrapper[4746]: I0129 17:32:14.975445 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kszj6" podStartSLOduration=2.566668479 podStartE2EDuration="4.975425543s" podCreationTimestamp="2026-01-29 17:32:10 +0000 UTC" firstStartedPulling="2026-01-29 17:32:11.915531757 +0000 UTC m=+3454.316116401" lastFinishedPulling="2026-01-29 17:32:14.324288811 +0000 UTC m=+3456.724873465" observedRunningTime="2026-01-29 17:32:14.969856472 +0000 UTC m=+3457.370441146" watchObservedRunningTime="2026-01-29 17:32:14.975425543 +0000 UTC m=+3457.376010197"
Jan 29 17:32:16 crc kubenswrapper[4746]: E0129 17:32:16.448041 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:32:21 crc kubenswrapper[4746]: I0129 17:32:21.278538 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:21 crc kubenswrapper[4746]: I0129 17:32:21.278862 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:21 crc kubenswrapper[4746]: I0129 17:32:21.360766 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:22 crc kubenswrapper[4746]: I0129 17:32:22.064110 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.124809 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kszj6"]
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.125570 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kszj6" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="registry-server" containerID="cri-o://43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b" gracePeriod=2
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.527431 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kszj6"
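The pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration, from creation to observed running, and podStartSLOduration, which is the same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling, on the monotonic m=+ readings) subtracted. Re-deriving the certified-operators-kszj6 numbers as a check:

package main

import "fmt"

func main() {
	// Monotonic (m=+...) readings from the certified-operators-kszj6 entry above.
	firstStartedPulling := 3454.316116401
	lastFinishedPulling := 3456.724873465
	podStartE2E := 4.975425543 // podStartE2EDuration in seconds

	pull := lastFinishedPulling - firstStartedPulling // image-pull window
	slo := podStartE2E - pull                         // startup time excluding the pull
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo)
	// Prints pull=2.408757064s slo=2.566668479s, matching the logged
	// podStartSLOduration (up to float64 rounding).
}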
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.657937 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-catalog-content\") pod \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") "
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.657990 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-utilities\") pod \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") "
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.658077 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c9pl\" (UniqueName: \"kubernetes.io/projected/6edd6557-0b51-40fe-86bb-8f1f3af55b10-kube-api-access-9c9pl\") pod \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\" (UID: \"6edd6557-0b51-40fe-86bb-8f1f3af55b10\") "
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.658949 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-utilities" (OuterVolumeSpecName: "utilities") pod "6edd6557-0b51-40fe-86bb-8f1f3af55b10" (UID: "6edd6557-0b51-40fe-86bb-8f1f3af55b10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.662956 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edd6557-0b51-40fe-86bb-8f1f3af55b10-kube-api-access-9c9pl" (OuterVolumeSpecName: "kube-api-access-9c9pl") pod "6edd6557-0b51-40fe-86bb-8f1f3af55b10" (UID: "6edd6557-0b51-40fe-86bb-8f1f3af55b10"). InnerVolumeSpecName "kube-api-access-9c9pl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.707027 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6edd6557-0b51-40fe-86bb-8f1f3af55b10" (UID: "6edd6557-0b51-40fe-86bb-8f1f3af55b10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.760072 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.760116 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6edd6557-0b51-40fe-86bb-8f1f3af55b10-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:32:25 crc kubenswrapper[4746]: I0129 17:32:25.760126 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c9pl\" (UniqueName: \"kubernetes.io/projected/6edd6557-0b51-40fe-86bb-8f1f3af55b10-kube-api-access-9c9pl\") on node \"crc\" DevicePath \"\""
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.022563 4746 generic.go:334] "Generic (PLEG): container finished" podID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerID="43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b" exitCode=0
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.022613 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kszj6"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.022658 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kszj6" event={"ID":"6edd6557-0b51-40fe-86bb-8f1f3af55b10","Type":"ContainerDied","Data":"43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b"}
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.023135 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kszj6" event={"ID":"6edd6557-0b51-40fe-86bb-8f1f3af55b10","Type":"ContainerDied","Data":"e237c8218ddf047a5efd938caebcd2f7ddf0126be3c64123c45c78b47f82b8d6"}
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.023168 4746 scope.go:117] "RemoveContainer" containerID="43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.044050 4746 scope.go:117] "RemoveContainer" containerID="1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.056803 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kszj6"]
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.066963 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kszj6"]
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.068122 4746 scope.go:117] "RemoveContainer" containerID="a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.093788 4746 scope.go:117] "RemoveContainer" containerID="43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b"
Jan 29 17:32:26 crc kubenswrapper[4746]: E0129 17:32:26.094228 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b\": container with ID starting with 43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b not found: ID does not exist" containerID="43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.094312 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b"} err="failed to get container status \"43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b\": rpc error: code = NotFound desc = could not find container \"43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b\": container with ID starting with 43265061fb652ea185bb60b73ab346b6c1c664f3f9a9b15d6db1dae56fceb23b not found: ID does not exist"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.094366 4746 scope.go:117] "RemoveContainer" containerID="1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8"
Jan 29 17:32:26 crc kubenswrapper[4746]: E0129 17:32:26.094856 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8\": container with ID starting with 1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8 not found: ID does not exist" containerID="1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.094889 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8"} err="failed to get container status \"1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8\": rpc error: code = NotFound desc = could not find container \"1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8\": container with ID starting with 1ec5a40fa76a4a6eb5af36bfbd31c7268ee3daf686c1b94148c4874cf772d9b8 not found: ID does not exist"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.094909 4746 scope.go:117] "RemoveContainer" containerID="a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201"
Jan 29 17:32:26 crc kubenswrapper[4746]: E0129 17:32:26.095437 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201\": container with ID starting with a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201 not found: ID does not exist" containerID="a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.095493 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201"} err="failed to get container status \"a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201\": rpc error: code = NotFound desc = could not find container \"a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201\": container with ID starting with a0667539b18869d7de81c552ffac492dd7b3bebba59adad093e0b4baeccfe201 not found: ID does not exist"
Jan 29 17:32:26 crc kubenswrapper[4746]: I0129 17:32:26.454283 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" path="/var/lib/kubelet/pods/6edd6557-0b51-40fe-86bb-8f1f3af55b10/volumes"
Jan 29 17:32:28 crc kubenswrapper[4746]: E0129 17:32:28.462062 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:32:39 crc kubenswrapper[4746]: E0129 17:32:39.449122 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:32:49 crc kubenswrapper[4746]: I0129 17:32:49.065675 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:32:49 crc kubenswrapper[4746]: I0129 17:32:49.066329 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:32:53 crc kubenswrapper[4746]: E0129 17:32:53.447566 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:33:07 crc kubenswrapper[4746]: E0129 17:33:07.448744 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:33:18 crc kubenswrapper[4746]: E0129 17:33:18.452964 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
Jan 29 17:33:19 crc kubenswrapper[4746]: I0129 17:33:19.065489 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:33:19 crc kubenswrapper[4746]: I0129 17:33:19.065832 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:33:30 crc kubenswrapper[4746]: E0129 17:33:30.447689 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849"
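The machine-config-daemon liveness probe failing above is an HTTP GET against http://127.0.0.1:8798/health that keeps dying with connection refused; the kubelet counts roughly 2xx/3xx responses as success. A standalone sketch of the same check, with the timeout value assumed rather than taken from the pod spec:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second} // probe timeout assumed
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused", as above
		fmt.Printf("Liveness probe status=failure output=%q\n", err.Error())
		os.Exit(1)
	}
	resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		fmt.Println("Liveness probe status=failure code", resp.StatusCode)
		os.Exit(1)
	}
	fmt.Println("Liveness probe status=success")
}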
Jan 29 17:33:30 crc kubenswrapper[4746]: I0129 17:33:30.961618 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vbnj6"]
Jan 29 17:33:30 crc kubenswrapper[4746]: E0129 17:33:30.961959 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="extract-content"
Jan 29 17:33:30 crc kubenswrapper[4746]: I0129 17:33:30.961981 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="extract-content"
Jan 29 17:33:30 crc kubenswrapper[4746]: E0129 17:33:30.962002 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="extract-utilities"
Jan 29 17:33:30 crc kubenswrapper[4746]: I0129 17:33:30.962011 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="extract-utilities"
Jan 29 17:33:30 crc kubenswrapper[4746]: E0129 17:33:30.962033 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="registry-server"
Jan 29 17:33:30 crc kubenswrapper[4746]: I0129 17:33:30.962041 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="registry-server"
Jan 29 17:33:30 crc kubenswrapper[4746]: I0129 17:33:30.962287 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6edd6557-0b51-40fe-86bb-8f1f3af55b10" containerName="registry-server"
Jan 29 17:33:30 crc kubenswrapper[4746]: I0129 17:33:30.963282 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:30 crc kubenswrapper[4746]: I0129 17:33:30.978352 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbnj6"]
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.013449 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-utilities\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.013769 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtv87\" (UniqueName: \"kubernetes.io/projected/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-kube-api-access-rtv87\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.013938 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-catalog-content\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.115079 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-catalog-content\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.115214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-utilities\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.115258 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtv87\" (UniqueName: \"kubernetes.io/projected/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-kube-api-access-rtv87\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.115763 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-catalog-content\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.116114 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-utilities\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.136720 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtv87\" (UniqueName: \"kubernetes.io/projected/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-kube-api-access-rtv87\") pod \"redhat-marketplace-vbnj6\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") " pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.295219 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:31 crc kubenswrapper[4746]: I0129 17:33:31.740140 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbnj6"]
Jan 29 17:33:32 crc kubenswrapper[4746]: I0129 17:33:32.523010 4746 generic.go:334] "Generic (PLEG): container finished" podID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerID="eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4" exitCode=0
Jan 29 17:33:32 crc kubenswrapper[4746]: I0129 17:33:32.523052 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbnj6" event={"ID":"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423","Type":"ContainerDied","Data":"eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4"}
Jan 29 17:33:32 crc kubenswrapper[4746]: I0129 17:33:32.523301 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbnj6" event={"ID":"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423","Type":"ContainerStarted","Data":"d04426c3838ebdbb57bb400787dc8c03be574b289790572b090bbda592ead75d"}
Jan 29 17:33:32 crc kubenswrapper[4746]: I0129 17:33:32.525658 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 17:33:33 crc kubenswrapper[4746]: I0129 17:33:33.531326 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbnj6" event={"ID":"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423","Type":"ContainerStarted","Data":"2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541"}
Jan 29 17:33:34 crc kubenswrapper[4746]: I0129 17:33:34.543510 4746 generic.go:334] "Generic (PLEG): container finished" podID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerID="2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541" exitCode=0
Jan 29 17:33:34 crc kubenswrapper[4746]: I0129 17:33:34.543561 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbnj6" event={"ID":"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423","Type":"ContainerDied","Data":"2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541"}
Jan 29 17:33:35 crc kubenswrapper[4746]: I0129 17:33:35.553569 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbnj6" event={"ID":"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423","Type":"ContainerStarted","Data":"e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a"}
Jan 29 17:33:35 crc kubenswrapper[4746]: I0129 17:33:35.574229 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vbnj6" podStartSLOduration=2.907449699 podStartE2EDuration="5.574213203s" podCreationTimestamp="2026-01-29 17:33:30 +0000 UTC" firstStartedPulling="2026-01-29 17:33:32.525132049 +0000 UTC m=+3534.925716693" lastFinishedPulling="2026-01-29 17:33:35.191895533 +0000 UTC m=+3537.592480197" observedRunningTime="2026-01-29 17:33:35.570569265 +0000 UTC m=+3537.971153919" watchObservedRunningTime="2026-01-29 17:33:35.574213203 +0000 UTC m=+3537.974797847"
Jan 29 17:33:41 crc kubenswrapper[4746]: I0129 17:33:41.296777 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:41 crc kubenswrapper[4746]: I0129 17:33:41.297264 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:41 crc kubenswrapper[4746]: I0129 17:33:41.370133 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:41 crc kubenswrapper[4746]: I0129 17:33:41.673722 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:41 crc kubenswrapper[4746]: I0129 17:33:41.731005 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbnj6"]
Jan 29 17:33:43 crc kubenswrapper[4746]: I0129 17:33:43.617515 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vbnj6" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="registry-server" containerID="cri-o://e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a" gracePeriod=2
Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.105697 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbnj6"
Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.257878 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtv87\" (UniqueName: \"kubernetes.io/projected/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-kube-api-access-rtv87\") pod \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") "
Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.257968 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-utilities\") pod \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") "
Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.257997 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-catalog-content\") pod \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\" (UID: \"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423\") "
Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.259135 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-utilities" (OuterVolumeSpecName: "utilities") pod "2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" (UID: "2f0cb41b-7777-46d6-ae6b-34fcb3eb9423"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.294443 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-kube-api-access-rtv87" (OuterVolumeSpecName: "kube-api-access-rtv87") pod "2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" (UID: "2f0cb41b-7777-46d6-ae6b-34fcb3eb9423"). InnerVolumeSpecName "kube-api-access-rtv87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.344010 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" (UID: "2f0cb41b-7777-46d6-ae6b-34fcb3eb9423"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
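gracePeriod=2 on the "Killing container with a grace period" entry above means the runtime delivers SIGTERM and escalates to SIGKILL if the container outlives the two-second grace window. A sketch of that escalation against a local Unix process; the kubelet drives the same sequence through the CRI instead, and the trap'd shell below just forces the SIGKILL path:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace delivers SIGTERM, then SIGKILL once the grace period expires.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	_ = cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL
		<-done
		fmt.Println("killed after grace period expired")
	}
}

func main() {
	// The trap makes the child ignore SIGTERM so the escalation is exercised.
	cmd := exec.Command("sh", "-c", `trap "" TERM; sleep 60`)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // gracePeriod=2, as in the log
}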
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.359912 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtv87\" (UniqueName: \"kubernetes.io/projected/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-kube-api-access-rtv87\") on node \"crc\" DevicePath \"\"" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.359960 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.359973 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:33:44 crc kubenswrapper[4746]: E0129 17:33:44.451486 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.625852 4746 generic.go:334] "Generic (PLEG): container finished" podID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerID="e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a" exitCode=0 Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.625909 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbnj6" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.625915 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbnj6" event={"ID":"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423","Type":"ContainerDied","Data":"e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a"} Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.626389 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbnj6" event={"ID":"2f0cb41b-7777-46d6-ae6b-34fcb3eb9423","Type":"ContainerDied","Data":"d04426c3838ebdbb57bb400787dc8c03be574b289790572b090bbda592ead75d"} Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.626425 4746 scope.go:117] "RemoveContainer" containerID="e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.646437 4746 scope.go:117] "RemoveContainer" containerID="2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.648109 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbnj6"] Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.656670 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbnj6"] Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.673315 4746 scope.go:117] "RemoveContainer" containerID="eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.702906 4746 scope.go:117] "RemoveContainer" containerID="e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a" Jan 29 17:33:44 crc kubenswrapper[4746]: E0129 17:33:44.703442 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a\": container with ID starting with e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a not found: ID does not exist" containerID="e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.703481 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a"} err="failed to get container status \"e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a\": rpc error: code = NotFound desc = could not find container \"e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a\": container with ID starting with e94f340c41259cef5835655e197086d3629fba126e4eb9dbca3bcc9e14f6e38a not found: ID does not exist" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.703511 4746 scope.go:117] "RemoveContainer" containerID="2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541" Jan 29 17:33:44 crc kubenswrapper[4746]: E0129 17:33:44.703947 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541\": container with ID starting with 2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541 not found: ID does not exist" containerID="2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.703990 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541"} err="failed to get container status \"2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541\": rpc error: code = NotFound desc = could not find container \"2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541\": container with ID starting with 2e9614bbb0ae496ae56a7f40c53c8cebfddf17feddadfe4e3907e004dea94541 not found: ID does not exist" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.704020 4746 scope.go:117] "RemoveContainer" containerID="eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4" Jan 29 17:33:44 crc kubenswrapper[4746]: E0129 17:33:44.704394 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4\": container with ID starting with eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4 not found: ID does not exist" containerID="eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4" Jan 29 17:33:44 crc kubenswrapper[4746]: I0129 17:33:44.704423 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4"} err="failed to get container status \"eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4\": rpc error: code = NotFound desc = could not find container \"eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4\": container with ID starting with eb6dcc93e8f2ac5f74f0795c162c0f58ff70d0e26ddfaa9943e19c0e640bddc4 not found: ID does not exist" Jan 29 17:33:46 crc kubenswrapper[4746]: I0129 17:33:46.455373 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" path="/var/lib/kubelet/pods/2f0cb41b-7777-46d6-ae6b-34fcb3eb9423/volumes" Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.065237 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8vzgw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.065594 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.065640 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.066291 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676"} pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.066359 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerName="machine-config-daemon" containerID="cri-o://56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676" gracePeriod=600 Jan 29 17:33:49 crc kubenswrapper[4746]: E0129 17:33:49.213869 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.666745 4746 generic.go:334] "Generic (PLEG): container finished" podID="c20d2bd9-a984-476f-855f-6a0365ccdab7" containerID="56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676" exitCode=0 Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.666783 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" event={"ID":"c20d2bd9-a984-476f-855f-6a0365ccdab7","Type":"ContainerDied","Data":"56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676"} Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.666848 4746 scope.go:117] "RemoveContainer" containerID="43dba09b67eb01818ddc17ff2d11e06359f9a659fcd0477355e15b7687688ca2" Jan 29 17:33:49 crc kubenswrapper[4746]: I0129 17:33:49.667452 4746 scope.go:117] "RemoveContainer" containerID="56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676" Jan 29 17:33:49 crc kubenswrapper[4746]: E0129 17:33:49.667866 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:33:58 crc kubenswrapper[4746]: E0129 17:33:58.452409 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" Jan 29 17:34:00 crc kubenswrapper[4746]: I0129 17:34:00.446411 4746 scope.go:117] "RemoveContainer" containerID="56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676" Jan 29 17:34:00 crc kubenswrapper[4746]: E0129 17:34:00.447120 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:34:11 crc kubenswrapper[4746]: I0129 17:34:11.848974 4746 generic.go:334] "Generic (PLEG): container finished" podID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerID="442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf" exitCode=0 Jan 29 17:34:11 crc kubenswrapper[4746]: I0129 17:34:11.849169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7njw" event={"ID":"dce72e0a-3332-43c6-b3fd-e503bd7a2849","Type":"ContainerDied","Data":"442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf"} Jan 29 17:34:12 crc kubenswrapper[4746]: I0129 17:34:12.861888 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7njw" event={"ID":"dce72e0a-3332-43c6-b3fd-e503bd7a2849","Type":"ContainerStarted","Data":"d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77"} Jan 29 17:34:12 crc kubenswrapper[4746]: I0129 17:34:12.887832 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l7njw" podStartSLOduration=4.601896024 podStartE2EDuration="10m56.887814378s" podCreationTimestamp="2026-01-29 17:23:16 +0000 UTC" firstStartedPulling="2026-01-29 17:23:20.22642911 +0000 UTC m=+2922.627013764" lastFinishedPulling="2026-01-29 17:34:12.512347454 +0000 UTC m=+3574.912932118" observedRunningTime="2026-01-29 17:34:12.887775107 +0000 UTC m=+3575.288359791" watchObservedRunningTime="2026-01-29 17:34:12.887814378 +0000 UTC m=+3575.288399042" Jan 29 17:34:15 crc kubenswrapper[4746]: I0129 17:34:15.445477 4746 scope.go:117] "RemoveContainer" containerID="56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676" Jan 29 17:34:15 crc kubenswrapper[4746]: E0129 17:34:15.445818 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" 
podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:34:18 crc kubenswrapper[4746]: I0129 17:34:18.304885 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:34:18 crc kubenswrapper[4746]: I0129 17:34:18.305385 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:34:18 crc kubenswrapper[4746]: I0129 17:34:18.345327 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:34:18 crc kubenswrapper[4746]: I0129 17:34:18.994986 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:34:19 crc kubenswrapper[4746]: I0129 17:34:19.066434 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7njw"] Jan 29 17:34:20 crc kubenswrapper[4746]: I0129 17:34:20.925312 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l7njw" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="registry-server" containerID="cri-o://d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77" gracePeriod=2 Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.347623 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.451775 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-catalog-content\") pod \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.451932 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-utilities\") pod \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.451962 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6nh9\" (UniqueName: \"kubernetes.io/projected/dce72e0a-3332-43c6-b3fd-e503bd7a2849-kube-api-access-j6nh9\") pod \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\" (UID: \"dce72e0a-3332-43c6-b3fd-e503bd7a2849\") " Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.453988 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-utilities" (OuterVolumeSpecName: "utilities") pod "dce72e0a-3332-43c6-b3fd-e503bd7a2849" (UID: "dce72e0a-3332-43c6-b3fd-e503bd7a2849"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.457998 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce72e0a-3332-43c6-b3fd-e503bd7a2849-kube-api-access-j6nh9" (OuterVolumeSpecName: "kube-api-access-j6nh9") pod "dce72e0a-3332-43c6-b3fd-e503bd7a2849" (UID: "dce72e0a-3332-43c6-b3fd-e503bd7a2849"). InnerVolumeSpecName "kube-api-access-j6nh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.550222 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dce72e0a-3332-43c6-b3fd-e503bd7a2849" (UID: "dce72e0a-3332-43c6-b3fd-e503bd7a2849"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.553585 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.553635 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce72e0a-3332-43c6-b3fd-e503bd7a2849-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.553650 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6nh9\" (UniqueName: \"kubernetes.io/projected/dce72e0a-3332-43c6-b3fd-e503bd7a2849-kube-api-access-j6nh9\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.936647 4746 generic.go:334] "Generic (PLEG): container finished" podID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerID="d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77" exitCode=0 Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.936710 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l7njw" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.936706 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7njw" event={"ID":"dce72e0a-3332-43c6-b3fd-e503bd7a2849","Type":"ContainerDied","Data":"d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77"} Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.936833 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l7njw" event={"ID":"dce72e0a-3332-43c6-b3fd-e503bd7a2849","Type":"ContainerDied","Data":"95bb6ab11f6892e7a44b845e78a49d9d0cd2f0ec617fc86a82aef3aae88d6410"} Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.936853 4746 scope.go:117] "RemoveContainer" containerID="d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.966622 4746 scope.go:117] "RemoveContainer" containerID="442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf" Jan 29 17:34:21 crc kubenswrapper[4746]: I0129 17:34:21.996367 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l7njw"] Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.009143 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l7njw"] Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.024965 4746 scope.go:117] "RemoveContainer" containerID="853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036148 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6kjkj"] Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.036560 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="extract-content" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036592 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="extract-content" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.036610 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="extract-utilities" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036618 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="extract-utilities" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.036631 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="registry-server" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036639 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="registry-server" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.036655 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="extract-utilities" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036663 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="extract-utilities" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.036680 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="extract-content" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036687 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="extract-content" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.036696 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="registry-server" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036704 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="registry-server" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036894 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0cb41b-7777-46d6-ae6b-34fcb3eb9423" containerName="registry-server" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.036918 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" containerName="registry-server" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.039535 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.060453 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46n2b\" (UniqueName: \"kubernetes.io/projected/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-kube-api-access-46n2b\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.060537 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-catalog-content\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.060654 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-utilities\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.062258 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6kjkj"] Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.066396 4746 scope.go:117] "RemoveContainer" containerID="d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.066902 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77\": container with ID starting with d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77 not found: ID does not exist" containerID="d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.066947 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77"} err="failed to get container status \"d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77\": rpc error: code = NotFound desc = could not find container \"d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77\": container with ID starting with d449166b5ee7d213222cd27d85ad796e026150f0d36522c6f8ebe1145269fe77 not found: ID does not exist" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.066978 4746 scope.go:117] "RemoveContainer" containerID="442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.067311 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf\": container with ID starting with 442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf not found: ID does not exist" containerID="442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.067360 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf"} err="failed to get container status \"442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf\": rpc error: code = NotFound desc = could not find container \"442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf\": container with ID starting with 442e43dd18caf6e8e6cdf56dae9d06a720eec717159d4635131b5b8df4684acf not found: ID does not exist" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.067392 4746 scope.go:117] "RemoveContainer" containerID="853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c" Jan 29 17:34:22 crc kubenswrapper[4746]: E0129 17:34:22.067671 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c\": container with ID starting with 853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c not found: ID does not exist" containerID="853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.067692 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c"} err="failed to get container status \"853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c\": rpc error: code = NotFound desc = could not find container \"853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c\": container with ID starting with 853d7a6ea46a255ec97dbd81fb66526144118bfcb70aee631983a23beb44061c not found: ID does not exist" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.161526 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46n2b\" (UniqueName: \"kubernetes.io/projected/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-kube-api-access-46n2b\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.161580 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-catalog-content\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.161641 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-utilities\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.162064 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-utilities\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.162282 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-catalog-content\") pod \"community-operators-6kjkj\" 
(UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.191060 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46n2b\" (UniqueName: \"kubernetes.io/projected/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-kube-api-access-46n2b\") pod \"community-operators-6kjkj\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.381213 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.455432 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce72e0a-3332-43c6-b3fd-e503bd7a2849" path="/var/lib/kubelet/pods/dce72e0a-3332-43c6-b3fd-e503bd7a2849/volumes" Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.640887 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6kjkj"] Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.946684 4746 generic.go:334] "Generic (PLEG): container finished" podID="15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" containerID="2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0" exitCode=0 Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.946844 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kjkj" event={"ID":"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef","Type":"ContainerDied","Data":"2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0"} Jan 29 17:34:22 crc kubenswrapper[4746]: I0129 17:34:22.947085 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kjkj" event={"ID":"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef","Type":"ContainerStarted","Data":"83047e585317419f3c15f0b91e5857373785a03536ccf57bc0737352d4c06e50"} Jan 29 17:34:23 crc kubenswrapper[4746]: I0129 17:34:23.956819 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kjkj" event={"ID":"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef","Type":"ContainerStarted","Data":"0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407"} Jan 29 17:34:24 crc kubenswrapper[4746]: I0129 17:34:24.968448 4746 generic.go:334] "Generic (PLEG): container finished" podID="15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" containerID="0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407" exitCode=0 Jan 29 17:34:24 crc kubenswrapper[4746]: I0129 17:34:24.968499 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kjkj" event={"ID":"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef","Type":"ContainerDied","Data":"0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407"} Jan 29 17:34:25 crc kubenswrapper[4746]: I0129 17:34:25.978442 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kjkj" event={"ID":"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef","Type":"ContainerStarted","Data":"914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1"} Jan 29 17:34:26 crc kubenswrapper[4746]: I0129 17:34:26.000735 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6kjkj" podStartSLOduration=1.559244428 podStartE2EDuration="4.000713371s" podCreationTimestamp="2026-01-29 17:34:22 +0000 
UTC" firstStartedPulling="2026-01-29 17:34:22.94911113 +0000 UTC m=+3585.349695774" lastFinishedPulling="2026-01-29 17:34:25.390580073 +0000 UTC m=+3587.791164717" observedRunningTime="2026-01-29 17:34:25.994250916 +0000 UTC m=+3588.394835580" watchObservedRunningTime="2026-01-29 17:34:26.000713371 +0000 UTC m=+3588.401298025" Jan 29 17:34:27 crc kubenswrapper[4746]: I0129 17:34:27.446105 4746 scope.go:117] "RemoveContainer" containerID="56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676" Jan 29 17:34:27 crc kubenswrapper[4746]: E0129 17:34:27.446930 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" Jan 29 17:34:32 crc kubenswrapper[4746]: I0129 17:34:32.382030 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:32 crc kubenswrapper[4746]: I0129 17:34:32.382684 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:32 crc kubenswrapper[4746]: I0129 17:34:32.462497 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:33 crc kubenswrapper[4746]: I0129 17:34:33.085957 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:33 crc kubenswrapper[4746]: I0129 17:34:33.135755 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6kjkj"] Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.059126 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6kjkj" podUID="15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" containerName="registry-server" containerID="cri-o://914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1" gracePeriod=2 Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.446106 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.553648 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-catalog-content\") pod \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.553692 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-utilities\") pod \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.553712 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46n2b\" (UniqueName: \"kubernetes.io/projected/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-kube-api-access-46n2b\") pod \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\" (UID: \"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef\") " Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.554730 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-utilities" (OuterVolumeSpecName: "utilities") pod "15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" (UID: "15de58c3-0ed7-4a13-b0d2-9f31ceb681ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.560150 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-kube-api-access-46n2b" (OuterVolumeSpecName: "kube-api-access-46n2b") pod "15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" (UID: "15de58c3-0ed7-4a13-b0d2-9f31ceb681ef"). InnerVolumeSpecName "kube-api-access-46n2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.600955 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" (UID: "15de58c3-0ed7-4a13-b0d2-9f31ceb681ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.655778 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.656048 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:35 crc kubenswrapper[4746]: I0129 17:34:35.656152 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46n2b\" (UniqueName: \"kubernetes.io/projected/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef-kube-api-access-46n2b\") on node \"crc\" DevicePath \"\"" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.072926 4746 generic.go:334] "Generic (PLEG): container finished" podID="15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" containerID="914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1" exitCode=0 Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.072994 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kjkj" event={"ID":"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef","Type":"ContainerDied","Data":"914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1"} Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.073054 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6kjkj" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.074448 4746 scope.go:117] "RemoveContainer" containerID="914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.074432 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6kjkj" event={"ID":"15de58c3-0ed7-4a13-b0d2-9f31ceb681ef","Type":"ContainerDied","Data":"83047e585317419f3c15f0b91e5857373785a03536ccf57bc0737352d4c06e50"} Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.100390 4746 scope.go:117] "RemoveContainer" containerID="0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.120420 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6kjkj"] Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.128845 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6kjkj"] Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.139108 4746 scope.go:117] "RemoveContainer" containerID="2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.161062 4746 scope.go:117] "RemoveContainer" containerID="914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1" Jan 29 17:34:36 crc kubenswrapper[4746]: E0129 17:34:36.161814 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1\": container with ID starting with 914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1 not found: ID does not exist" containerID="914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.161869 
4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1"} err="failed to get container status \"914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1\": rpc error: code = NotFound desc = could not find container \"914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1\": container with ID starting with 914c46ec3cf928a7e213a2189a4e454091d60252f4674d507f47545bbc6ea1e1 not found: ID does not exist" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.161912 4746 scope.go:117] "RemoveContainer" containerID="0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407" Jan 29 17:34:36 crc kubenswrapper[4746]: E0129 17:34:36.162560 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407\": container with ID starting with 0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407 not found: ID does not exist" containerID="0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.162591 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407"} err="failed to get container status \"0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407\": rpc error: code = NotFound desc = could not find container \"0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407\": container with ID starting with 0ed22aaa16254ed7da56cce28ed4792353b9728b0009ffdab78ebb8239fd3407 not found: ID does not exist" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.162607 4746 scope.go:117] "RemoveContainer" containerID="2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0" Jan 29 17:34:36 crc kubenswrapper[4746]: E0129 17:34:36.162999 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0\": container with ID starting with 2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0 not found: ID does not exist" containerID="2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.163050 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0"} err="failed to get container status \"2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0\": rpc error: code = NotFound desc = could not find container \"2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0\": container with ID starting with 2493927f068f247b0073258c8e75a1ed27faa2a6319208b34cd8c76d61974be0 not found: ID does not exist" Jan 29 17:34:36 crc kubenswrapper[4746]: I0129 17:34:36.453890 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15de58c3-0ed7-4a13-b0d2-9f31ceb681ef" path="/var/lib/kubelet/pods/15de58c3-0ed7-4a13-b0d2-9f31ceb681ef/volumes" Jan 29 17:34:42 crc kubenswrapper[4746]: I0129 17:34:42.446561 4746 scope.go:117] "RemoveContainer" containerID="56ac3d974ce501e8fc39ffbcb668acdfc740796d21823642db74c77b399b7676" Jan 29 17:34:42 crc kubenswrapper[4746]: E0129 17:34:42.447504 4746 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8vzgw_openshift-machine-config-operator(c20d2bd9-a984-476f-855f-6a0365ccdab7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8vzgw" podUID="c20d2bd9-a984-476f-855f-6a0365ccdab7" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136715102024446 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136715103017364 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136705457016522 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136705457015472 5ustar corecore